Domino – Exchange On Premises Migration Pt2: Wrestling the Outlook Client

In part 1 of my blog about migrating from Domino to Exchange on premises, I talked about the challenges of working with Exchange for someone who is used to Domino. If only that were all of it. Now I want to talk about the issues around Outlook and the other Exchange client options, which require those of us used to working with Domino to change our thinking.

In Domino we are used to a mail file living on the server, and regardless of whether we use Notes or a browser to open it, the data is the same. The exception is a local replica, but when we are using one that is very clear: the database visibly shows “on Local” rather than the server name.

We can also easily swap between the local and server replicas and even have both open at the same time.

In Outlook you only have the option to open a mailbox in either online or cached mode.

So let’s talk about cached mode because that’s the root of our migration pains. You must have a mail profile defined in Windows in order to run Outlook. The default setting for an Outlook profile is “cached mode” and that’s not very visible to the users. The screenshot below is what the status bar shows when you are accessing Outlook in cached mode.

[Screenshot: the Outlook status bar in cached mode, showing “Connected to Microsoft Exchange”]

In cached mode there is a local OST file that syncs with your online Exchange mailbox.  It’s not something you can access or open outside of Outlook.

[Screenshot: the Outlook Data Files list, showing the local OST file]

Outlook will always use cached mode unless you modify the settings of the data file or the account to disable it.

[Screenshot: the account settings where Cached Exchange Mode can be disabled]

As you can see from the configuration settings below, a cached OST file is not the same as a local replica, and it’s not designed to be.  The purpose of the cache is to make Outlook more responsive by not having to fetch everything from the server.

[Screenshot: the Cached Exchange Mode offline settings]

Why does this matter during a migration?  Most migration tools claim to be able to migrate directly to the server mailboxes, but in practice the speed of that migration is often unworkably slow.  If it can be achieved it’s by far the most efficient approach, but Exchange’s own default configuration works against you, including activity throttling and the filtering / scanning of messages.  Many, if not most, migration tools do not expect to migrate “all data and attachments”, which is what we are often asked to do.  If what we are aiming for is 100% data parity between a Domino mail file and an Exchange mailbox, then migrating that 5GB, 10GB, or 30GB volume directly to the server isn’t an option.  In addition, if a migration to the server partially runs and then fails, it’s almost impossible to backfill the missing data with incremental updates.  I have tested several migration tools this way and just didn’t have confidence in the data population directly on the server.

In sites where I have done migrations to on premises servers, I’ve often found that the speed of migrating to the server mailbox, even on the same network, makes that approach unworkable, so instead I’ve migrated to a local OST file.  The difference between migrating a 10GB mail file to a local OST (about an hour) and directly to Exchange (about 2.5 days) is painfully obvious.  Putting more resources onto the migration machine didn’t significantly reduce the time, and in fact each tool either crashed (running as a Domino task) or crashed (running as a Windows desktop task) when trying to write directly to Exchange.

An hour or two to migrate a Domino mail file to a local workstation OST isn’t bad though, right?  That’s not bad at all, and if you open Outlook you will see all the messages, folders, calendar entries, etc. displaying.  However, that’s because you’re looking at cached mode: you’re literally looking at the data you just migrated.  Create a profile for the same user on another machine and the mailbox will be empty, because at this point there is no data in Exchange, only in the local OST.  Another thing to be aware of is that there is no equivalent of an All Documents view in Outlook, so make sure your migration tool knows how to migrate unfoldered messages and your users know where to find them in their new mailbox.

Now to my next struggle.  Outlook will sync that data to Exchange, but it will take between 1 and 3 days to do so.  I have tried several tools that claim to speed up the syncing and I would advise you not to bother.  The methods they use to populate the Exchange mailbox from a local OST file sidestep much of the standard Outlook sync behaviour, meaning information is often missing or, in one case, calendar invites were sent out for every calendar entry pushed to Exchange.  I tried five of those tools and none worked 100%.  The risk of missing data or sending out duplicate calendar entries and emails was too high, so in the end I opted to stick with Outlook syncing.  Unlike Notes replication, I can only sync one OST / Outlook mailbox at a time, so it’s slow going unless I have multiple client machines.  What is nice is that once the initial multi-GB mailbox has synced to Exchange, I can do incremental updates quickly.

So my wrestling with the Outlook client boils down to:

  • Create mail profiles that use cached mode
  • Migrate to a local OST
  • Use Outlook to sync that to Exchange
  • Pay attention to Outlook limits, like a maximum of 500 folders*
  • Be patient

*On migrated Domino mailboxes that pushed up against the folder or item limits, we found Outlook would repeatedly run out of system memory when trying to sync.

One good way to test whether the Exchange data matches the Domino data is to use Outlook Web Access, as that accesses data directly on the Exchange server.  Except it isn’t as faithful to the server data as we are used to seeing with Verse or iNotes.  OWA, too, decides to show you through a browser what it thinks you most need to see rather than everything that’s there.  Often folders will claim to be empty and have no data when in fact the data is there but hasn’t been refreshed by Exchange (think Updall).  There are few things scarier in OWA than an empty folder with a link suggesting you refresh from the server.  It just doesn’t instill confidence in the user experience.

Finally we have Outlook mobile, or even the native iOS mail application.  Mobile access wasn’t a separate configuration, and unless you configure Exchange otherwise the default is that it is granted to everyone.  In one instance a couple of weeks ago, mobile access suddenly stopped working for all users who hadn’t already set up their devices; when they tried to log in they got “invalid name or password”.  I eventually tracked that down to a Windows update that had changed permissions in Active Directory that Exchange needed set.  You can see reference to the issue here, and slightly differently here, although note it seems to have been an issue since Exchange 2010 and is still present with Exchange 2016.  I was surprised it was broken by a Windows update, but it was.

I know (and have used) many workarounds for the issues I run into, but that’s not for here.  Coming from a Domino and Notes background, I believe we’ve been conditioned to think in a certain way about mail file structure, server performance, local data, and the user experience, and expecting to duplicate that exactly is always going to be troublesome.

#DominoForever

Whooomf – All Change. HCL Buys The Shop…

According to this Press Release, as of mid June 2019 HCL take ownership of a bunch of IBM products including Notes, Domino and Connections on premises. Since late 2017 there has been a partnership with IBM on some of those products, such as Notes, Domino, Traveler and Sametime*, so this will take IBM out of the picture entirely. Here are my first “oh hey it’s 4am” thoughts on why that’s not entirely surprising or unwelcome news…

HCL are all about leading with on premises, not cloud. The purchase of Connections is for on premises and there are thousands of customers who want to stay on premises. Every other provider is either entirely Cloud already or pushing their on premises customers towards it by starving their products of development and support (waves at Microsoft). *cough*revenue stream*cough*

HCL have shown in 2018 that they can innovate (Domino’s TCO offerings, Notes on the iPad, Node integration, etc.), develop quickly and deliver on their promises. That’s been a refreshing change.

They must be pleased with how the current partnership products have done to buy them, and more, outright.

When HCL started the partnership with IBM they brought on some of the best of the original IBM Collaboration development team and have continued to recruit at high speed. It was a smart move and one I hope they repeat across not just development but support and marketing too.

HCL already showed with “Places” that they have ideas for how collaboration tools could work (see this concept video https://youtu.be/CJNLmBkyvMo) and that’s good news for Connections customers who gain a large team and become part of a bigger collaboration story in a company that “gets it”.

Throughout 2018 HCL have made repeated efforts to reach out to customers and Business Partners, asking for our feedback and finding out what we want: from sponsoring user group events (and turning up in droves) around the world to hosting the factory tour in June at their offices in Chelmsford, where we had two days with the developers and their upcoming technologies. I believe they have proven they understand what this community is about and how much value comes from listening and – yes – collaborating.

Tonight I am more optimistic for the future of these products, and especially Connections, than I have been in a while. HCL, in my experience, behave more like a software start-up than anything else: moving fast, changing direction if necessary and always trying to lead by innovating. I hope many of the incredibly smart people at IBM (yes YOU) who have stood alongside these products for years do land at HCL if that’s what they want; it would be a huge loss if they don’t.

*HCL have confirmed that Sametime is included

Perfect10 – Building A Test Lab

In the 10th edition of my Perfect 10 webcast I explain how and why to build a test lab so you can get to deploying those products you downloaded today. You did download them, right?

I was asked recently to share the slides for all these Perfect10 presentations and, to be honest, I hadn’t thought of it, but it’s a good idea so I’ll be sharing them all this week.

Next up: Let’s Talk Domino v10 & Admin


The story of Async in JavaScript

By Tim Davis – Director of Development

In my last post I talked about some JavaScript concepts that will be useful when starting out with Node.js. This time I would like to talk about a potentially awkward part of JavaScript: asynchronous (async) operations. It is a bit of a long story, but it does have a happy ending.

So what is an asynchronous operation? Basically, it means a function or command that goes off and does its own thing while the rest of the code continues. It can be really useful or really annoying depending on the circumstances.

You may have used async code if you ever did AJAX calls to Domino web agents for lookups on web pages. The rest of the page loads while the lookup to the web agent comes back, and the user is happy because part of the page updates in the background. This is brilliant and is the classic use case for an async function.
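For instance, the classic browser-side version of that pattern looks something like this. This is just a sketch: the agent URL and the updatePage function are hypothetical.

var xhr = new XMLHttpRequest();
xhr.open("GET", "/mydb.nsf/lookupagent?openagent&key=123");
xhr.onload = function () {
    // this runs later, whenever the agent responds
    updatePage(xhr.responseText);
};
xhr.send();
// execution carries on here immediately, before the lookup finishes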

This asynchronous behaviour is built into JavaScript through and through and you need to bear it in mind when you do any programming in Node.

So how does this async behaviour manifest itself? Let’s look at an example. Suppose we have an asynchronous function that goes and does a lookup somewhere (simulated here with a timer so the example can actually be run):

function doAsyncLookup() {
    // simulate a lookup that takes a moment
    setTimeout(function () {
        console.log("got data");
    }, 100);
}

Then suppose we call this function from our main code, something like this:

console.log("start");
doAsyncLookup();
console.log("finish");

The output will be this:

start
finish
got data

By the time the lookup has completed it is too late; the code has moved on.

So how do you handle something like this? How can you possibly control your processes if things finish on their own?

The original way JavaScript async functions allowed you to handle this was with ‘callbacks’.

A callback is a function that the async function calls when it is finished. So instead of your code continuing after the async function is called, it continues inside the async function.
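A minimal sketch of what that looks like, reusing the simulated timer lookup from earlier (the dummy results array is just for illustration):

function doAsyncLookup( callback ) {
    setTimeout(function () {
        console.log("got data");
        callback( ["some", "dummy", "results"] );  // hand the results back
    }, 100);
}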

In our example a callback could look something like this:

function myCallback() {
    console.log("finish");
}

console.log("start");
doAsyncLookup( myCallback );

Now, the output would be this:

start
got data
finish

This is much better. Usually, the callback function receives the results of the async function as a parameter, so it can act on those results. So in examples of callbacks around the web, you might see something like:

function myCallback( myResults ) { 
    displayResults( myResults );
    console.log("finish"); 
} 

console.log("start"); 
doAsyncLookup( myCallback );

Often the callback function doesn’t need to be defined separately and is instead defined inline, in the call to the async function, as a sort of shorthand, so you will probably see a lot of examples looking like this:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    displayResults( myResults ); 
    console.log("finish"); 
} );

This is all great, but the problem with callbacks is that you can easily get a confusing chain of callbacks within callbacks within callbacks if you want to do other asynchronous stuff with the results.

For example, suppose you do a lookup to get a list, then want to look up something else for each item in the list, and then maybe update a record based on that lookup, and finally write updates to the screen in a UI framework. In a JavaScript environment it is highly likely that each of these operations is asynchronous. You end up with a confusing chain of functions calling functions calling functions stretching off to the right, with all the attendant risk of coding errors that you would expect:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    lookupItemDetails( myResults, function ( myDetails ) {
        saveDetails( myDetails, function ( saveStatus ) {
            updateUIDisplay( saveStatus, function ( updatedOK ) {
                console.log("finish");
            } );
        } );
    } );    
} );

It gets even worse if you add in error handling. We may have solved the async problem, but at the cost of terrible code patterns.

Well, after putting up with this for a while the JavaScript world came up with a better version of callbacks, called Promises.

Promises are much more readable than callbacks and have some useful additional features. You pass the results of each function to the next with a ‘then’, and you can just add more ‘thens’ on the end if you have more async things to do.

Our nightmare-indented example above becomes something like this (here I am using the popular arrow notation for functions, see my previous article for more on them):

console.log("start"); 
doAsyncLookup()
.then( (myResults) => { return lookupItemDetails(myResults) } )
.then( (myDetails) => { return saveDetails(myDetails) } )
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } )
.then( (updatedOK) => { console.log("finish") } );

This is much nicer. We don’t have all that ugly nesting.

Error handling is easier, too, because you can add a ‘catch’ to the end (or in the middle if you need) and it is all still much more clear and understandable:

console.log("start"); 
doAsyncLookup() 
.then( (myResults) => { return lookupItemDetails(myResults) } ) 
.then( (myDetails) => { return saveDetails(myDetails) } ) 
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } ) 
.then( (updatedOK) => { console.log("finish") } )
.catch( (err) => { console.error(err); } );

What is really neat is that you can create your own promises from existing callbacks, so you can tidy up any older messy async functions.
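For example, here is a sketch of wrapping our callback-style doAsyncLookup in a promise:

function doAsyncLookupPromise() {
    return new Promise(function (resolve, reject) {
        // resolve the promise from inside the old-style callback
        doAsyncLookup(function (myResults) {
            resolve(myResults);
        });
    });
}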

Promises also have some great added features which help with other async problems. For example, with ‘Promise.all’ you can fire off a whole list of async calls together and get all of their results back in one go, in the same order as the calls you passed in.
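A quick sketch, assuming three hypothetical lookup functions that each return a promise:

Promise.all( [ lookupA(), lookupB(), lookupC() ] )
.then( (results) => {
    // results[0], results[1] and results[2] line up with the order of the calls
    console.log("all done", results);
} );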

So promises solved the callback nesting problem, but The Gods of JavaScript were still not satisfied.

Even with all these improvements, this code is still too ‘asynchronous’. It is still a chain of function after function and you have to pay attention to what is passed from one to the next, and remember that these are all asynchronous and be careful with your error handling.

Once upon a time, Willy Wonka gave us ‘square sweets that look round’, and so now TGOJ have given us ‘asynchronous functions that look synchronous’.

The latest and greatest advance in async handling is Async/Await.

All you need to do is make your main function ‘async’, and you can ‘await’ all your promises:

async function myAsyncStuff() {
    console.log("start"); 
    let myResults = await doAsyncLookup();
    let myDetails = await lookupItemDetails(myResults);
    let saveStatus = await saveDetails(myDetails);
    let updatedOK = await updateUIDisplay(saveStatus); 
    console.log("finish");
}

How cool is this? Each asynchronous function runs in order, with no messy callbacks or chains of ‘thens’. They all sit in their own line of code just like regular functions. You can do things in between them, and you can wrap them in the usual try/catch error handling blocks. All the async handling stuff is gone, and this is done with just two little keywords.
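As a sketch, here is a version of the same function with error handling wrapped around the awaits:

async function myAsyncStuff() {
    try {
        console.log("start");
        let myResults = await doAsyncLookup();
        let myDetails = await lookupItemDetails(myResults);
        console.log("finish");
    } catch (err) {
        // a rejection from any of the awaited promises lands here
        console.log("something went wrong", err);
    }
}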

Plus, the functions are all still promises, so you can do promise-y things with them if you want to, and you can create and ‘await’ your own promises to refactor and revive old callback code.
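For example, the promise wrapper sketched earlier can simply be awaited like any other promise:

let myResults = await doAsyncLookupPromise();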

Async/Await is fully supported by Node.js, by popular UI frameworks like Angular and React, and by all modern browsers.

One of the biggest headaches in JavaScript development now has an elegant and usable solution and they all lived happily ever after.

I hope you enjoyed this little story. I told you it had a happy ending.