Clive, one of our lead Solution Architects with a lifetime of delivering IBM FileNet solutions, continues his Journey to the Cloud – this time sharing his experience of simplifying the Content component deployment of IBM Cloud Pak for Business Automation.
To catch up on his previous blogs, here are part 1, part 2, part 3, part 4 and part 5.
Simplifying the Content component deployment of IBM Cloud Pak for Business Automation
If you’ve read previous posts, you’ll know that I have been tinkering with the deployment of IBM Cloud Pak for Business Automation for both customer and demo purposes.
One major issue with the deployment is the complexity and resource requirements of the Cloud Pak, even when all that is needed is the Content component, formerly known as FileNet P8 and now FileNet Content Manager (FNCM).
In this post, I will introduce the FileNet Content Manager deployment and updates to OpenShift cluster creation for both production and demonstration/test purposes.
Let’s start with OpenShift.
The Red Hat/IBM team has been adding features to assist with deploying an OpenShift cluster on-premises and extending support to more cloud providers.
I’m going to talk about bare-metal and virtualised on-premises deployments of an OpenShift cluster. There are three methods for achieving this, and Red Hat has simplified each.
A Test/Development environment
Red Hat now has two offerings:
- OpenShift Local, previously known as CodeReady Containers
- Single Node OpenShift (SNO)

I’ve not tried the deployment on OpenShift Local but plan to, and will report back my findings.
The Assisted Installer
For a Single Node OpenShift cluster, it is now possible to use the Assisted Installer to set up the cluster config, download a bootable ISO and deploy to a single machine, either bare metal or virtualised. This just needs a couple of DNS entries, and away it goes!
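As a sketch of what that "couple of DNS entries" looks like, a single-node cluster typically needs an api record and a wildcard apps record, both pointing at the node's address. The cluster name, domain and IP below are placeholders, not from my environment:

```text
; Illustrative BIND-style zone entries for a single-node cluster,
; where "sno" is the cluster name and "example.com" the base domain
api.sno.example.com.     IN A 192.168.1.50
*.apps.sno.example.com.  IN A 192.168.1.50
```

With those in place, every route the cluster creates under *.apps resolves to the same node, which is all a single-node deployment needs.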
This same method can run either a compact OpenShift cluster with three nodes or a full-blown cluster with three master nodes and as many worker nodes as licenses and infrastructure allow. The added advantage is that load balancing is managed inside the OpenShift cluster, and only two virtual IP addresses are needed to connect.
The Installer Provisioned Infrastructure (IPI)
Using IPI for deployment has all the simplicity benefits of the Assisted Installer but goes further: it creates the machines it needs, either with a cloud provider such as AWS or on a vSphere cluster on-premises. During the initial config, credentials for the vSphere environment are provided, alongside the resources for each machine set (masters and compute nodes), including disk size, CPUs and RAM. The provisioner then creates the machines on the vSphere cluster to those specs and starts the cluster.
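To give a feel for what the provisioner is fed, here is a trimmed, illustrative install-config.yaml for a vSphere IPI deployment. Every name, address and credential is a placeholder, and the full schema is in the OpenShift documentation:

```yaml
apiVersion: v1
baseDomain: example.com            # placeholder base domain
metadata:
  name: ocp                        # cluster name
controlPlane:
  name: master
  replicas: 3
  platform:
    vsphere:                       # per-machine-set resources
      cpus: 4
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
compute:
- name: worker
  replicas: 3
  platform:
    vsphere:
      cpus: 8
      memoryMB: 32768
      osDisk:
        diskSizeGB: 120
platform:
  vsphere:                         # vSphere credentials and targets
    vCenter: vcenter.example.com
    username: installer@vsphere.local
    password: '<redacted>'
    datacenter: dc1
    defaultDatastore: datastore1
    cluster: vsphere-cluster-1
    network: VM Network
    apiVIP: 192.168.1.60           # the two virtual IPs for API and ingress
    ingressVIP: 192.168.1.61
pullSecret: '<redacted>'
```

From there a single `openshift-install create cluster` run provisions the VMs and brings the cluster up.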
The User-Provisioned Infrastructure (UPI)
The final method is User-Provisioned Infrastructure, which is still the most complicated, although steps have been taken to improve the deployment. It still needs the load balancer configured and the machines created manually, booting from an ISO image. I think this is still the method that will be used for creating production environments, as IPI relies on DHCP for IP addresses: there is currently no way to assign reserved static addresses to the nodes, so most infrastructure teams will not have the control they want.
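For UPI, the load balancer configuration is on you. As an illustrative sketch (all names and addresses are placeholders), an HAProxy front end for the API and ingress endpoints might look like this:

```text
# Illustrative HAProxy fragment for a UPI cluster; a similar pair of
# sections is also needed for ports 22623 (machine config) and 80.
frontend api
    bind *:6443
    mode tcp
    default_backend masters
backend masters
    mode tcp
    balance roundrobin
    server master0 192.168.1.10:6443 check
    server master1 192.168.1.11:6443 check
    server master2 192.168.1.12:6443 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend workers-https
backend workers-https
    mode tcp
    balance roundrobin
    server worker0 192.168.1.20:443 check
    server worker1 192.168.1.21:443 check
```

The api and *.apps DNS names then point at the load balancer rather than at individual nodes.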
Further details of these methods can be found in the OpenShift documentation:
https://docs.openshift.com/container-platform/4.10/installing/index.html
MicroShift
There is also a new project within Red Hat that looks interesting for deployments at the edge, taking the single node one step further. The project is called MicroShift and is a truly cut-down version of a Single Node OpenShift deployment: minimum requirements are 2 CPU cores, 2GB RAM and 1GB of free storage. This is great for developers, and maybe even for shifting IBM Navigator out to remote offices. Red Hat also suggests that multi-node clusters are in the pipeline, but nothing has been confirmed.
Options for FileNet Content Manager
With a more straightforward method of deploying the OpenShift cluster in hand, I started looking at options for FileNet Content Manager. When IBM introduced containers to the FileNet world, there was a way of deploying the containers in Docker to give a fully working demo environment of Content Platform Engine (CPE), Navigator, CSS, DB2 and OpenLDAP. This was very lightweight, and I had it running inside a virtual machine on my Dell XPS 13 Windows 10 machine with a four-core processor and 16GB RAM. It would never be production strength, but it was portable and served me well. The production version was a Kubernetes deployment, but I had ignored it for some time due to the promise of the IBM Cloud Pak for Business Automation (ICP4BA) features.
Having revisited this Kubernetes deployment, now running inside OpenShift, I have been relieved by its relative simplicity and stability. It still uses an operator and a config in YAML form, but gone is the 100,000+ line log file, replaced with a mere 15-20k-line one. The resource requirements are lower, as there is no need for all the operators, and there is no sign of the IBM Common Services that provide single sign-on and license monitoring across ICP4BA.
The downside is the limited set of available components: Content Platform Engine (including the original FileNet workflow), Navigator, GraphQL, CSS, CMIS, External Share and Task Manager. The YAML allows for the standard IBM T-shirt-sized deployments of Small, Medium and Large, but there are options for overriding this based on need, including the number of pods for each component and the resource requests for memory and CPU, albeit not an approach IBM recommends. Having said that, a single-node OpenShift cluster used for a demo or dev environment would usually have a single user, so expanding memory and CPU use is probably not at the top of the requirements list.
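As an illustration of the kind of override, here is a fragment of the operator's custom resource. The field names follow the pattern of the sample CRs shipped with the FNCM operator, but treat them as indicative and check the samples for your version:

```yaml
# Indicative fragment of an FNCM deployment custom resource.
# Field names should be verified against the operator's sample CRs.
spec:
  shared_configuration:
    sc_deployment_profile_size: small   # the T-shirt size: small | medium | large
  ecm_configuration:
    cpe:
      replica_count: 1                  # override the profile's pod count
      resources:
        requests:
          cpu: 500m                     # trimmed for a single-user demo
          memory: 2Gi
```

The profile size sets sensible defaults for everything; the per-component overrides only need to name the handful of values you want to change.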
The settings can also be changed in the opposite direction, should licenses allow and usage patterns require. Imagine a production environment with a steady-state user population, so the current 2 CPE pods, 3 Navigator pods and 2 CSS pods are working well. Now the system needs to take on several million new documents; the database is sized and ready, but what happens to FileNet Content Manager? Why not add a couple of CPE pods specifically to deal with the import load, leaving the steady state for users? Once the import has been completed, the system can return to the 2 CPE pods in the steady state.
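Because the operator reconciles the deployment, the way to do this is to change the replica count in the custom resource rather than scaling the deployment directly (the operator would put a manual `oc scale` back). A sketch, using indicative field names that should be checked against the operator's sample CRs:

```yaml
# Indicative CR edit for a bulk import: raise the CPE replica count,
# let the operator reconcile the extra pods, then set it back to 2
# once the import is done.
spec:
  ecm_configuration:
    cpe:
      replica_count: 4   # up from the steady-state 2 for the import window
```

The same edit-and-reconcile pattern applies to Navigator and CSS if the load lands there instead.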
Next Steps
The next steps for me are to look at MicroShift in more detail, try OpenShift Local, run Business Automation Insights (BAI) alongside in a standalone deployment, and apply the Business Automation Workflow (BAW) container deployment in the same way as FileNet Content Manager, checking that they work together and comparing with ICP4BA to see what is missing. Further investigation is also planned into the recently added Windows container support, to see whether it is something Datacap can utilise.
If you enjoyed this blog, why not sign up below to receive an email when the next blog is published?