Deploying The AppDev Pack - An Admin's Guide

Over here on the blog is Tim's next entry on Node development and Domino; this time he explains how to use the early release of the AppDev Pack to access (read and write) Domino data via Node. However, I don't let developers do Domino admin, so this is the bit where I explain how to configure Domino. It's all very easy, and it's all still early release, so things may well change for GA.

First you will need to request the early release package, which you can do here. What you'll then get is a series of .tgz files, including one named 'domino-appdev-docs-site.tgz' which, once extracted, gives you the index.html with instructions for installing.

Bear in mind that, at least initially, this runs only on Domino 10 on Linux, and that Domino 10 on 64-bit Linux officially means RHEL 7.4 or higher, or SLES 12. I went with RHEL 7.5.

Next we need to install "Proton" so it can be run as a Domino server task, which just means extracting the file 'proton-addin.tgz' into the /opt/ibm/domino/notes/latest/linux directory. There is also some checking to make sure files are present, and some permission setting, but I don't want to repeat the install instructions here as I would rather you refer to the latest official version of those. Suffice it to say this is a five-minute job at most.
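On my server that boiled down to something like this (a sketch only; the archive location is wherever you downloaded it to, and the official instructions cover the file checks and permission steps):

cd /opt/ibm/domino/notes/latest/linux
tar -xvzf /tmp/proton-addin.tgz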

Once the files are in place you can start and stop Proton as you would any other Domino task by doing “load Proton”, “tell Proton quit”, etc.

Then there are a few notes.ini settings you can choose to set, including:

PROTON_SSL= if you want the traffic between the Proton task and the Node server to be encrypted (0/1).

PROTON_LISTEN_PORT= the port you want Proton to listen on for connections from Node (default 3002).

PROTON_LISTEN_ADDRESS= the address you want Proton to listen on on your Domino server, such as 127.0.0.1, which would require Node to be installed locally, or 0.0.0.0, which will listen on any available address.

PROTON_AUTHENTICATION= how Proton handles authentication. There are currently two options: client_cert or anonymous. With authentication set to anonymous, all requests that come from the Node application are made as the "Anonymous" Domino user, and your Domino application must allow Anonymous rights in the ACL.

The "client_cert" option requires the Node application to present a client certificate to the Proton task, and requires the Domino administrator to have already mapped that certificate to a specific person document by importing it. Note that "client_cert" still means all activity from that Node application is done as a single identified user that must be in the ACL, but it does mean you need not allow anonymous access. You can also use different identities in different Node applications.

Of course, what we all want is OAuth, or some authentication model that allows individual user identities, and that is presumably part of why the product is still considered "early release". Both the "anonymous" and "client_cert" models are of limited use in production.

PROTON_KEYFILE= the keyfile to use if you want Proton to communicate using SSL. This isn't related to the Domino keyfile (although it could be), and since this is only for communication between your Node server and your Domino Proton task, and never for client-facing traffic, you could use entirely internally-generated keys; they only need to be shared with the Node server itself.
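Put together, the settings might look like this in notes.ini (illustrative values only; the keyfile name is a placeholder for whatever you generate):

PROTON_SSL=1
PROTON_LISTEN_PORT=3002
PROTON_LISTEN_ADDRESS=0.0.0.0
PROTON_AUTHENTICATION=client_cert
PROTON_KEYFILE=proton_keyfile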

HCL have kindly provided scripts to generate all the certificates you need for your testing.
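On the Node side, consuming those certificates looks something like this sketch with the early-release domino-db module. The API is early release too, so names and options may change by GA, and the host name, certificate paths and database here are made up for illustration:

const fs = require('fs');
const { useServer } = require('@domino/domino-db');

const serverConfig = {
  hostName: 'domino.example.com',              // your Domino server
  connection: { port: '3002', secure: true },  // matches PROTON_LISTEN_PORT / PROTON_SSL
  credentials: {
    rootCertificate: fs.readFileSync('./certs/ca.crt'),
    clientCertificate: fs.readFileSync('./certs/app.crt'),  // the cert imported into a person document
    clientKey: fs.readFileSync('./certs/app.key'),
  },
};

useServer(serverConfig).then(async (server) => {
  // Open a database and run a DQL query against it
  const database = await server.useDatabase({ filePath: 'names.nsf' });
  const result = await database.bulkReadDocuments({ query: "Form = 'Person'" });
  console.log(result);
});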

Finally we need to create a design catalog for Proton to use. You can add individual databases to the design catalog, and the first one you add actually creates the catalog. There must be a catalog with at least one database in it for Proton to work at all.

The catalog contains an index of all the design elements in a Domino database, so to add a new database to the catalog you would type:
load updall <database> -e

This isn't dynamically maintained, though, so if you change the design of a database you must update its entry in the catalog if you want new design elements added or updated, like this:
load updall <database path> -d

The purpose of the catalog is to speed up DQL's access to the Domino data. Not every database has to be catalogued, but doing so obviously speeds up access and opens up things like view scanning using the <'View or folder name'>.<Columnname> syntax.
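For example, a DQL query scanning a view column might look like this (the view and column names here are made up for illustration):

'Orders'.Total > 100 and Form = 'Order'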

So that's my very quick admin guide to what I did to enable Tim to do what he does. It's very possible (even probable) that this entire post will be obsolete when the GA release ships, but hopefully this and Tim's blog help you get started with the early release.

Adminlicious - My Favourite TCO Features in Domino 10

This is my presentation from Icon UK on Thursday 13th September. There are lots of TCO features coming in Domino 10 that I've been working with and look forward to putting into production. In this presentation I cover things like cluster symmetry, pre-send mail checking, deletion logs and the New Relic statistics reporting.

Say it with me….

28 days until the Domino 10 release.

Folder Sync v10 #DOMINO10 #DOMINO2025

Next up in "cool admin things coming your way in v10" - folder syncing. By selecting a folder on a cluster instance you can tell the server to keep that folder in sync across the entire cluster. The folder can contain database files (NSFs and NTFs) but also NLOs.

"Well that's just dumb, Gab… NLOs are encrypted with the server ID, so they can't be synced across clustermates." But a-ha! HCL are way ahead of you. The NLO sync involves the source server decrypting each NLO before syncing it to the destination, where it is re-encrypted before saving.

So no more making sure databases are replicated to every instance in a cluster. No more creating mass replicas when adding a new server to the cluster or building a new server, and no more worrying about missing NLOs if you copy over a DAOS-enabled database and not its associated NLO files.

Genius.

File Repair v10 #Domino10 #Domino2025

If you follow this blog you know that v10 of Domino, Sametime, Verse on Premises, Traveler etc. are all due out this year, and I want to do some very short blog pieces talking about new features and what my use case would be for each.

So let's start with FILE REPAIR (or whatever it's going to be called).

The File Repair feature for Domino v10 is designed to automatically repair corrupted databases in a cluster. Should Domino detect corruption in any of its clustered databases, it automatically removes the corrupted instance and pulls a new instance from a good cluster mate. Best of all, this happens very fast, doesn't use regular replication to repopulate, doesn't require downtime, and the cluster manager is fully aware of the database's availability throughout.

I can think of plenty of instances where I have had a corrupted database that I couldn't replace or fix without server downtime. No more, and it's another good reason to cluster your servers.


V10 Roadmap: What’s new in Mail, Chat, and Verse on Premises?

Following on from our presentation at IBM Think, on Thursday May 24th I will be presenting a follow-up webcast with Ram Krishnamurthy, Chief Architect, Notes, Designer and XPages (HCL) and Andrew Manby, Director of Product Management (IBM). On the webcast we will be showing the latest additions to Notes client mail, calendaring and Verse on Premises, all from live code, which will ship with v10 of each product later this year.

If you saw our presentation at Think, there have been more additions and changes since then; the speed at which the products are being developed is something I haven't seen before, and there are some great new features and UI changes I think you will like.

We have a lot of content to cover in 45 minutes, and Andrew will have some news you will want to hear too, so go here to register for the webcast, starting at 10am EST this Thursday, the 24th of May.

If you want to stay up to date with all the changes happening to Domino, Sametime, Traveler, Verse and other products then keep an eye on the Destination Domino site, where all the news and announcements appear first, and while you're there why not sign up for the newsletter.

As we all get ready for v10 of the products later this year I will be blogging more of my own preparation work on my blog at https://turtleblog.info and also populating a YouTube playlist called Perfect10 with a series of 10-ish minute videos to help you prepare.

Path To A Perfect 10 #perfect10

Here's an intro to a new blog series I am starting that will hopefully take us all through preparing for v10 of the ICS products due out later this year. If you have suggestions for what you'd find useful beyond the things I mention in the video, please let me know. I'll be using the hashtag #perfect10 for this, which I'm sure isn't used anywhere else and won't lead to any confusion 🙂

Any feedback on the format is welcome. Most posts will have both slide content and matching video, as I thought that combination the most engaging. I am trying this out as a YouTube channel.


Creative Ideas For Docker (and Domino)

In an earlier post I mentioned that I have been working on new technology projects since the end of last year, and I wanted to share here what I'm doing, as well as keep you updated on my progress, if only to keep pressure on myself. I have been working with, and speaking about, Docker and containers for the past year, so it was good news to hear that IBM will now support Docker as a platform for Domino (as of 9.0.1 FP10). http://www-01.ibm.com/support/docview.wss?uid=swg22013200

Good news, but only a first step. Domino still needs to be installed and run in its entirety inside a container, although the data would (or could) be mapped outside. Ideally, in a microservices model, Domino would be componentised and we could have separate containers for the router task, for amgr, for updall, etc., so we could build a server to the exact scale we needed. That may come in the future; right now there's a lot we can do, and there are two projects in particular I'm working on to solve existing issues.

Issue 1: A DR-Only Domino Cluster Mate

It's a common request for me to design a Domino infrastructure that includes clustered servers, with at least one server at a remote location that is never used except in a DR situation. The problem with that in a Domino world is also Domino's most powerful clustering feature: the assumption that any server in a cluster is as accessible to users as any other, so that if the server a user tries to connect to is busy, the user is pushed to a less-busy server. That's fine if all the cluster servers are on equal bandwidth or equally accessible, but a remote DR-only server that should only be accessed in emergencies should not be part of that failover process. It's a double-edged sword: we want the DR server to be part of the cluster so it is kept up to date in real time and so users can fail over to it without any configuration changes or action on their part, but we don't want users failing over to it until we say so.

I tend to tackle this by giving the DR server server_availability_threshold=100, which marks it as "busy" and prevents any client failover while the other servers are online. It works, 'ish', but someone has to disable that setting to ensure all users fail over neatly when needed, and it isn't unusual to have a few users end up on the DR server regardless.
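The setting itself can be flipped at the server console when DR is actually invoked, for example:

set config SERVER_AVAILABILITY_THRESHOLD=0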

So what can Docker do for me?

I don't see that much value in a standard Domino image for Docker in my world. When I build a Domino server it tends to have a unique configuration and set of tasks, so although a standard image would be nice, my goal in deploying Domino under Docker is very different: to create identical containers running identical versions of Domino with identical names, e.g. Brass/Turtle and Brass/Turtle. Both containers will point to external data stores (either in another container or a file system mount). Both will be part of a larger Domino cluster. Both will have the same IP address. Obviously both can't be online at the same time, so one will be online and operating as part of the cluster, and only if that server or container goes down will the other container, at another location, activate. In that model we have passive/active DR on a Domino server that participates fully in workload balancing and failover. I don't have to worry about tuning the Domino server itself because the remote instance will only be active if the local instance isn't. I would use Docker clustering (both Swarm and Kubernetes can do this) to decide when to activate the second container.
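As a sketch of the idea, assuming a hypothetical domino:10 image and a host-mounted data directory, both locations would run the same container definition, with only one live at a time:

docker run -d --name brass-turtle \
  --hostname brass.turtle.local \
  -v /domino/data:/local/notesdata \
  -p 1352:1352 \
  domino:10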

In principle I have this designed, but I have lots of questions I need to test, not least the location of the data. Having a data container, even a clustered data container, would be the simplest method; that way the Domino container(s) would reference the same data container(s). However, Domino is very demanding of disk resources, and Docker data containers don't have much in the way of file system protection, so I need to test both performance and stability. This won't work if the data can be easily corrupted. The other idea is a host-based mount point, but of course that could easily become inaccessible to the remote Domino container. I have a few other things I am testing, but they're too long to go into in this post. More on that later.

Issue 2: DomainKeys Identified Mail for Domino

In its simplest explanation, DKIM requires your sending SMTP server to sign part of the message header with a private key, and to publish the matching public key in your DNS zone so the receiving server can verify the signature, thereby confirming the message did actually originate from your server. It's one of the latest attempts to control fraudulent emails and, combined with SPF records, constitutes the requirements for DMARC compliance.
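As an illustration, the published key is just a TXT record in your DNS zone, something like this (the selector and domain are made up, and the key itself is a placeholder):

selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"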

The DKIM component of DMARC is something Domino does not support, either inbound or outbound. It may do in the future, but it doesn't right now, and I am increasingly being asked for DMARC configurations. Devices like Barracuda can support inbound DMARC checking but not outbound DKIM signing. The primary way I recommend doing that now is to deploy Postfix running OpenDKIM as a relay server between Domino and the outside world; your mail can then be "stamped" by that server as it leaves.

My second Docker project, therefore, is to design and publish an image of Postfix + OpenDKIM that can be used by Domino (or any SMTP server).
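The heart of that image is really just glue configuration: OpenDKIM listening as a milter, and Postfix pointed at it. A minimal sketch (the domain, selector and paths are illustrative):

# /etc/opendkim.conf - sign outbound mail for your domain
Domain                  example.com
Selector                selector1
KeyFile                 /etc/opendkim/keys/selector1.private
Socket                  inet:8891@localhost

# /etc/postfix/main.cf - hand outbound mail to OpenDKIM for signing
smtpd_milters           = inet:localhost:8891
non_smtpd_milters       = $smtpd_milters
milter_default_action   = accept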

More on these as I progress.


Champions Expertise - 2018 Technology

IBM Champion Expertise presentations are a new initiative we are starting this month, whereby Champions provide audio presentations on a particular topic. This month is "2018 Futures and Technology", and here is my presentation on what I think is going to be big for 2018: containerisation vs virtualisation and where it goes next. This presentation has audio, and I tried to keep it short, but feel free to double-speed me if 14 minutes is too long.

I mention in my presentation that I have a more detailed presentation on Docker architecture on SlideShare, and if you want to see that it's here. I'd also be grateful for any feedback on the length, style or other aspects of the presentation, and on what you think of the Champions Expertise idea.

Engage - Was It Really Over A Week Ago?

It's 2am, so apologies in advance for any rambling in this post, but I've been wanting to write about the Engage conference in Antwerp ever since I got back last Thursday (and if I leave it much longer I might as well write about next year's conference).

This year Engage was held in Antwerp, which is only a 3.5hr drive for me, so we met everyone else there who came by train. Top tip: don't try to drive in Antwerp, the one-way systems will get you every time. Yet another beautiful city and conference location from Theo and the Engage team. The Elizabeth conference center was spacious, and since there were 400 of us and the Engage team had made sure to provide lots of seating and meeting areas, it felt right. One thing I really enjoy at conferences is the opportunity to meet people (OK, I hate approaching people to talk, but I like being part of a conversation), and I had some great conversations with sponsors and attendees. I also managed to bore people to death about my latest obsession (Docker). IBM sent a lot of speakers this year, with Scott Souder and Barry Rosen updating us on Domino and Verse futures, and both Jason Roy Gary and Maureen Leland there to sprinkle some (Connections) pink around. There was a lot of open discussion about technology now, what we were each learning and working with, and a fair amount of enthusiasm for it all, so thanks to everyone for that.

This year the agenda expanded to include emerging technologies, and one of my sessions was in that track: IoT in the Enterprise, GDPR and data. I try to aim my presentations at the audience I'm talking to, and when it comes to IoT the IT audience naturally has a lot more concerns than line-of-business managers. Outside of IT, IoT is purely about opportunity, but since IT has to take care of the rest, my presentation was more technical, with a security checklist for deploying IoT devices. All the opportunity for businesses will inevitably involve a lot of work from IT in the areas of data retention, data analysis, security and process redesign. Some really interesting technologies are emerging, and IoT is moving very fast, so now is the time to start planning how your business can take advantage of the incoming swarm of data and tools.

My second session was on configuring a Domino / cloud hybrid solution, with step-by-step instructions for setting up your first environment. That presentation is on my SlideShare and is also shared below. The key thing to understand about hybrid cloud is that as a Domino administrator you still manage all your users, groups, policies and your on-premises and hybrid servers; in fact, the only things you don't manage are the cloud servers themselves. Getting started with a hybrid cloud deployment is a good way to understand the potential for migrating or consolidating some of your mail services.

As always the Engage team put on an amazing event: lots of sessions to learn from, lots of people to meet and a lot of fun. I was very pleased to see Richard Moy, who runs the US-based MWLUG event, there for the first time, and I'm looking forward to attending his event in the US in August. Finally, my crowning achievement of the week was when no one at my table could identify either a Miley Cyrus or a Justin Bieber song at the closing dinner, and none of us considered cheating by using Shazam (I'm looking at YOU, Steph Heit and Amanda Bauman :-)). Theo promises us Engage will be back in May 2018 at a new location. See you there.