In an earlier post I mentioned that I have been working on new technology projects since the end of last year, and I wanted to share here what I’m doing, as well as to keep you updated on my progress, if only to keep pressure on myself. I have been working with, and speaking about, Docker and containers for the past year, and it was good news to hear that IBM will now support Docker as a platform for Domino (as of 9.0.1 FP10): http://www-01.ibm.com/support/docview.wss?uid=swg22013200
Good news, but only a first step. Domino still needs to be installed and run in its entirety inside a container, although the data can be mapped outside. Ideally, in a microservices model, Domino would be componentised and we could have separate containers for the router task, for amgr, for updall, etc., so we could build a server to the exact scale we needed. However, that may come in the future; right now there’s a lot we can do, and I’m working on two projects in particular to solve existing issues.
Issue 1: A DR-Only Domino Cluster Mate
It’s a common request for me to design a Domino infrastructure that includes clustered servers, with at least one server at a remote location that is never used except in a DR situation. The problem with that in a Domino world is also Domino’s most powerful clustering feature: there is an assumption that if a server is in a cluster, it is as accessible to users as any other server in the cluster, and if it isn’t busy while the server a user tries to connect to is busy, the user will be pushed to the less busy server. That’s fine if all the cluster servers are on equal bandwidth or equally accessible, but a remote DR-only server that should only be accessed in emergencies should not be part of that failover process. It’s a double-edged sword: we want the DR server to be part of the cluster so it is kept up to date in real time and so users can fail over to it without any configuration changes or action on their part, but we don’t want users failing over to it until we say so.
I tend to tackle this by configuring the DR server with server_availability_threshold=100, which marks it as “busy” and prevents any client failover while the other servers are online. It mostly works, but someone has to disable that setting to ensure all users fail over neatly when needed, and it isn’t unusual to have a few users end up on the DR server regardless.
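As a sketch, the relevant notes.ini setting on the DR server looks like this (100 is the value from my design above; everything else in the file is environment-specific):

```ini
; notes.ini fragment on the DR-only cluster mate
; 100 = always report as BUSY, so clients are never pushed here
; while another cluster member is available
Server_Availability_Threshold=100
```

Remember this is the setting someone has to remove (or lower) by hand in a real DR event, which is exactly the manual step I want Docker to eliminate.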
So what can Docker do for me?
I don’t see that much value in a standard Domino image for Docker in my world. When I build a Domino server it tends to have a unique configuration and set of tasks, so although a standard image would be nice, my goal in deploying Domino under Docker is very different: to create identical containers running identical versions of Domino with identical names, e.g. Brass/Turtle and Brass/Turtle. Both containers will point to external data stores (either in another container or a file system mount). Both will be part of a larger Domino cluster. Both will have the same IP address. Obviously both can’t be online at the same time, so one will be online and operating as part of the cluster, and only if that server or container goes down would the other container, at another location, activate. In that model we have active/passive DR on a Domino server that participates fully in workload balancing and failover. I don’t have to worry about tuning the Domino server itself, because the remote instance will only be active if the local instance isn’t. I would use container orchestration (both Swarm and Kubernetes can do this) to decide when to activate the second container.
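As a rough sketch of that model, assuming a prebuilt image tagged domino:9.0.1-fp10 and a data mount on each host (the image name, hostname and paths are mine for illustration, not an official IBM image):

```shell
# Site A (active). Container name, hostname, ports and data path are illustrative.
docker run -d --name brass-turtle \
  --hostname brass.turtle.example.com \
  -v /domino/data:/local/notesdata \
  -p 1352:1352 -p 80:80 \
  domino:9.0.1-fp10

# Site B (passive): the identical command on the DR host.
# The orchestrator (Swarm or Kubernetes) starts this container
# only when it detects Site A's container is down.
```

The point is that nothing about the second container is "DR-flavoured"; it is byte-for-byte the same server definition, and the orchestration layer, not Domino, decides which one is live.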
In principle I have this designed, but I have lots of questions I need to test, not least the location of the data. Having a data container, even a clustered data container, would be the simplest method: the Domino container(s) would reference the same data container(s). However, Domino is very demanding of disk resources, and Docker data containers don’t have much in the way of file system protection, so I need to test both performance and stability. This won’t work if the data can be easily corrupted. The other idea is a host-based mount point, but of course that could easily become inaccessible to the remote Domino container. There are a few other things I am testing, but they are too long to go into in this post. More on that later.
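For clarity, the two data options I’m weighing look like this on the command line (volume name, paths and image tag are again placeholders):

```shell
# Option 1: a named, Docker-managed data volume
docker volume create domino-data
docker run -d --name brass-turtle \
  -v domino-data:/local/notesdata domino:9.0.1-fp10

# Option 2: a bind mount to a host directory,
# e.g. on shared or replicated storage
docker run -d --name brass-turtle \
  -v /srv/domino/data:/local/notesdata domino:9.0.1-fp10
```

Option 1 is the one that needs the performance and corruption testing I described; Option 2 is simpler but is exactly the mount that could become unreachable from the remote container.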
Issue 2: DomainKeys Identified Mail for Domino
In its simplest explanation, DKIM requires your sending SMTP server to sign part of the message with a private key and to publish the corresponding public key in your DNS zone, enabling the receiving server to verify the signature and thereby confirm the message did actually originate from your server. It’s one of the latest attempts to control fraudulent email and, combined with SPF records, forms the basis of DMARC compliance.
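To make the DNS side concrete: a DKIM keypair can be generated with openssl, and the public half is published as a TXT record under a selector (the selector “mail” and the domain “example.com” here are placeholders for your own values):

```shell
# Generate the signing keypair (2048-bit RSA is a common choice)
openssl genrsa -out mail.private 2048
openssl rsa -in mail.private -pubout -out mail.public

# The base64 public key from mail.public goes into a DNS TXT record like:
#   mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
```

The private key stays on the signing server; only the selector record in DNS is public.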
The DKIM component of DMARC is something Domino does not support, either inbound or outbound. It may do in the future, but it doesn’t right now, and I am increasingly being asked for DMARC configurations. Devices like Barracuda can support inbound DMARC checking but not outbound DKIM signing. The primary way I recommend doing that now is to deploy Postfix running OpenDKIM as a relay server between Domino and the outside world; your mail can then be “stamped” by that server as it leaves.
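The glue between the two is small. A minimal sketch, assuming OpenDKIM listens on a local TCP socket and Domino relays all outbound mail through the Postfix host (domain, selector and key path are placeholders):

```
# /etc/opendkim.conf (fragment)
Domain      example.com
Selector    mail
KeyFile     /etc/opendkim/keys/mail.private
Socket      inet:8891@localhost

# /etc/postfix/main.cf (fragment) - pass mail through the OpenDKIM milter
smtpd_milters         = inet:localhost:8891
non_smtpd_milters     = inet:localhost:8891
milter_default_action = accept
```

On the Domino side the only change is pointing the SMTP relay host at this Postfix server.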
My second Docker project, therefore, is to design and publish an image of Postfix + OpenDKIM that can be used in front of Domino (or any SMTP server).
More on these as I progress.