Deploy a Red Hat OpenShift Container Platform

Before deploying ArcGIS Enterprise on Kubernetes to Red Hat OpenShift (RHOS), you must prepare a RHOS cluster that meets ArcGIS Enterprise system requirements.

Preparing a Red Hat OpenShift cluster includes steps that are common across supported environments, such as setting up the Kubernetes cluster and corresponding nodes, and steps that are environment specific, such as creating a dedicated security context constraint (SCC) and handling ingress to the application.

Review the following steps and refer to Red Hat OpenShift documentation for more detailed instructions on how to prepare your environment.

  1. Create a Red Hat OpenShift cluster.

    There are many methods by which a RHOS cluster can be deployed. For on-premises deployments, refer to Red Hat's documentation. For cloud-based deployments, refer to the Red Hat OpenShift Service on AWS (ROSA) or Azure Red Hat OpenShift documentation.

  2. Update oc (kubectl) configuration.

    After creating the cluster, use the OpenShift command-line interface (oc) to pull the authenticated user connection information into your kubeconfig file with the following command:

    oc login https://<serverName>:<serverPort>
    
  3. Create storage classes.

    To tailor the reclaimPolicy and allowVolumeExpansion properties to the needs of your organization and workloads, it is recommended that you create a storage class that references one of the supported provisioners, such as Cinder, Manila, or vSphere Volume. Apply the appropriate YAML file to the cluster using the following command:

    oc apply -f <storageClass.yaml>
    
    See a sample default storage class YAML and a sample backup storage class YAML for more information.
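
    As an illustration, a default storage class using the vSphere Volume provisioner might look like the following sketch. The class name and parameters here are assumptions to adapt to your environment, not required values:

    ```yaml
    # Hypothetical default storage class; the provisioner and its
    # parameters depend on your environment.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: arcgis-storage-default
    provisioner: kubernetes.io/vsphere-volume
    parameters:
      diskformat: thin
    # Retain preserves volumes after their claims are deleted;
    # allowVolumeExpansion permits growing volumes later.
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    ```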
  4. Manage security context constraints.

    In OpenShift, SCCs act as a customized admission controller for pods prior to scheduling. By default, all pods are admitted under the restricted or restricted-v2 SCC because the allowed group is system:authenticated.

    Depending on your setup, you may need to allow additional privileges to meet ArcGIS Enterprise requirements. Review the following:
    1. Allow pods to run with a specific fsGroup.

      ArcGIS Enterprise workloads have a hard-coded fsGroup value to initialize volume permissions on newly provisioned block storage persistent volumes. This fsGroup must be allowed by an SCC because the default is to run within an arbitrary range. To do this, update a copy of the restricted or restricted-v2 SCC with the following section:

      fsGroup:
        ranges:
          - max: 117932853
            min: 117932853
        type: MustRunAs
      groups:
        - 'system:serviceaccounts:<deployment-project>'
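
      For context, a copied SCC with these sections in place might be sketched as follows. The name restricted-esri and the abbreviated field list are illustrative; carry over the remaining fields from the SCC you copied:

      ```yaml
      # Sketch of an SCC copied from restricted or restricted-v2, with
      # the fsGroup range required by ArcGIS Enterprise workloads.
      apiVersion: security.openshift.io/v1
      kind: SecurityContextConstraints
      metadata:
        name: restricted-esri
      fsGroup:
        ranges:
          - max: 117932853
            min: 117932853
        type: MustRunAs
      groups:
        - 'system:serviceaccounts:<deployment-project>'
      # ...remaining fields copied unchanged from the original SCC...
      ```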
      

    2. Allow binding on privileged ports for the ingress controller.

      For versions of OpenShift prior to 4.11, the restricted context does not allow the NET_BIND_SERVICE capability to be added to pods, which is required for the ingress controller. A new SCC should be cloned from the restricted policy and the following section appended:

      allowedCapabilities: 
         - NET_BIND_SERVICE
      

      Grant this SCC to the arcgis-ingress-serviceaccount service account so that the ingress controller can start properly:

      oc adm policy add-scc-to-user restricted-esri system:serviceaccount:<deployment-project>:arcgis-ingress-serviceaccount
      

      For OpenShift version 4.11 and later, the restricted-v2 SCC already has the required capability allowed.

    3. Increase max memory map areas.

      The spatiotemporal StatefulSet requires the underlying node to have an increased value for vm.max_map_count. This is enabled by default using an init container, but the command requires privileged access to the underlying host to run:

      sysctl -w vm.max_map_count=262144
      

      The worker nodes can be altered at startup by modifying the Ignition configuration file to run this command during bootstrapping. If you take this approach, set the ALLOW_PRIVILEGED_CONTAINERS property to false in the deployment script.

      See the Red Hat OpenShift documentation for additional details.
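
      One way to apply the sysctl during bootstrapping is a MachineConfig that writes a sysctl.d drop-in file on worker nodes. This is a sketch; the object name and Ignition version are assumptions to adapt to your cluster:

      ```yaml
      # Hypothetical MachineConfig that sets vm.max_map_count=262144
      # on all worker nodes at boot.
      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        name: 99-worker-max-map-count
        labels:
          machineconfiguration.openshift.io/role: worker
      spec:
        config:
          ignition:
            version: 3.2.0
          storage:
            files:
              - path: /etc/sysctl.d/99-max-map-count.conf
                mode: 0644
                overwrite: true
                contents:
                  # URL-encoded form of "vm.max_map_count=262144"
                  source: data:,vm.max_map_count%3D262144
      ```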

      Alternatively, you can give the arcgis-elastic-serviceaccount service account permission to run a privileged container, which allows the init container to run the required sysctl command:

      oc adm policy add-scc-to-user privileged-esri system:serviceaccount:<deployment-project>:arcgis-elastic-serviceaccount
      

    Review the sample SCC YAML files for more information.

  5. Configure Red Hat OpenShift Routes.

    OpenShift includes an out-of-the-box ingress controller that can be used in conjunction with the ingress controller shipped with ArcGIS Enterprise on Kubernetes. If you create an OpenShift Route through the command line, you can use the sample YAML. To set up a route through the OpenShift console, run the deployment script first, then create a route that references the internal service target.

    If you answer yes to the OpenShift Route question in the deployment script, the arcgis-ingress-controller service will not be exposed outside the cluster subnet and will be created as the ClusterIP type.

    Depending on where you prefer to terminate client SSL, the secure route should use either re-encrypt or passthrough as the termination mode. Selecting re-encrypt requires a TLS certificate as an input for the route, while passthrough presents the TLS certificate defined during the deployment phase to external clients.
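
    As an illustration, a re-encrypt route targeting the internal ingress service might look like the following sketch. The route name, host, target port, and certificate placeholder are assumptions to adapt to your deployment:

    ```yaml
    # Hypothetical re-encrypt route; adjust the host, certificate, and
    # namespace to match your deployment.
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: arcgis-route
      namespace: <deployment-project>
    spec:
      host: <fully-qualified-domain-name>
      to:
        kind: Service
        name: arcgis-ingress-controller
      port:
        targetPort: https
      tls:
        termination: reencrypt
        # CA that signed the internal ingress controller's certificate,
        # so the router can validate the re-encrypted backend connection.
        destinationCACertificate: |-
          <PEM-encoded CA certificate>
    ```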