System requirements

The minimum hardware and infrastructure required to run ArcGIS Enterprise on Kubernetes 11.0 are described below. These requirements also apply when deploying in a disconnected environment.

Supported environments

System requirements and specifications apply across all supported environments except where noted. For this release, the following environments are supported:

  • On-premises data center
    • Red Hat OpenShift Container Platform
  • Managed Kubernetes services on the cloud
    • Amazon Elastic Kubernetes Service (EKS)
    • Google Kubernetes Engine (GKE)
    • Microsoft Azure Kubernetes Service (AKS)

At this release, the following versions for each environment have been tested and are supported:

| ArcGIS Enterprise on Kubernetes | AKS | EKS | GKE | Red Hat OpenShift |
|---|---|---|---|---|
| Deploy 11.0 to an existing Kubernetes cluster | 1.21 - 1.24 | 1.21 - 1.24 | 1.21 - 1.24 | 4.7 - 4.10 |
| Upgrade an existing Kubernetes cluster post 11.0 deployment | Not supported | Not supported | Not supported | N/A |

Note:

To enable autoscaling for GIS services, Metrics Server is required to collect node metrics. When deploying in EKS environments, you must install Metrics Server with cluster-level privileges in the kube-system namespace; in other supported environments, Metrics Server is installed by default.
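
If Metrics Server is not already present in your EKS cluster, it is commonly installed from the Kubernetes metrics-server project's release manifest, for example as follows (confirm the current manifest URL and procedure against the Amazon EKS and metrics-server documentation):

    # Installs Metrics Server into the kube-system namespace with cluster-level privileges
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    # Verify that node metrics are being reported
    kubectl top nodes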

Container registry

Container images for ArcGIS Enterprise on Kubernetes are accessible from a private Docker Hub repository. Esri will provide access to this repository to those who are deploying ArcGIS Enterprise on Kubernetes. When deploying in a disconnected environment, you will need to use your organization's container registry.

Obtain an Esri license

To authorize your ArcGIS Enterprise organization during deployment, you need an ArcGIS Enterprise on Kubernetes license file in JSON format (.json file). To obtain this license file, visit My Esri using an account with privileges to take licensing action.

Kubernetes cluster

To deploy ArcGIS Enterprise on Kubernetes, you must have a Kubernetes cluster on one of the environments mentioned above.

Note:

When creating a cluster in GKE, you must use the Standard mode of operation. Autopilot mode is not supported.

Note:

In EKS, when creating or upgrading a cluster with Kubernetes 1.23 and later, you need to install the Amazon EBS Container Storage Interface (CSI) add-on. See the Amazon EKS documentation for details.
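
As an illustration only, the add-on can be installed with the AWS CLI; the cluster name, account ID, and IAM role (which the driver requires) are placeholders, and the full workflow is described in the Amazon EKS documentation:

    aws eks create-addon \
      --cluster-name my-cluster \
      --addon-name aws-ebs-csi-driver \
      --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole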

Namespace

ArcGIS Enterprise on Kubernetes requires its own dedicated namespace. The namespace must be created before running the deployment script. Each deployment of ArcGIS Enterprise on Kubernetes also requires a dedicated namespace.
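
For example, the namespace can be created with kubectl (the name arcgis is a placeholder; use any name that fits your conventions):

    kubectl create namespace arcgis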

CPU and memory

ArcGIS Enterprise on Kubernetes is deployed with one of three architecture profiles. Recommendations for resource (CPU and memory) requests and limits and overall compute requirements vary based on the selected profile. Recommendations for each profile are provided below.

The following are the minimum node requirements for each architecture profile. It is recommended that each worker/agent node have a minimum of 8 CPU and 32 GiB of memory.

| Architecture profile | Minimum worker/agent nodes | Total minimum CPU | Total minimum memory (GiB) |
|---|---|---|---|
| Standard availability | 4 | 32 | 128 |
| Enhanced availability | 5 | 40 | 160 |
| Development | 3 | 24 | 96 |

Note:

ArcGIS Enterprise on Kubernetes is only supported on CPUs that adhere to the x86_64 architecture (64 bit).

The pods in the ArcGIS Enterprise on Kubernetes deployment are distributed across the worker nodes in the cluster. When scaling the deployment or adding another ArcGIS Enterprise deployment to the cluster, you need to provision hardware accordingly. This may require an increase in the default maximum number of pods per node. The number of pods that are initially created varies with each architecture profile. As you scale horizontally or add new functionality, the number of pods increases.

Note:

ArcGIS Enterprise on Kubernetes does not support Windows Server node images in GKE environments.

Resource Quota object

ArcGIS Enterprise on Kubernetes pods have defined requests and limits for CPU and memory. If the namespace has a ResourceQuota object, the quota must be higher than the sum of all the pods' requests and limits. These values vary based on the architecture profile you've selected, as described below.

Note:

If you are performing an upgrade to 11.0, you must first update the resource quota values in the namespace to the 11.0 requirements.

It is recommended that you set aside at least 10 percent of the requested resources for proper functioning of the cluster nodes.

The following quota recommendations for each profile take this set-aside into account. The limit values shown are placeholders and must be configured based on your scalability requirements:

Standard availability profile:

spec: 
    hard: 
      limits.cpu: "164" 
      limits.memory: 272Gi 
      requests.cpu: "24" 
      requests.memory: 108Gi

Enhanced availability profile:

spec: 
    hard: 
      limits.cpu: "192" 
      limits.memory: 328Gi 
      requests.cpu: "30" 
      requests.memory: 156Gi

Development profile:

spec: 
    hard: 
      limits.cpu: "120" 
      limits.memory: 188Gi 
      requests.cpu: "16" 
      requests.memory: 72Gi
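
For reference, the following is a minimal sketch of a complete ResourceQuota manifest using the development profile values above (the object name arcgis-quota and the namespace arcgis are placeholders):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: arcgis-quota
      namespace: arcgis
    spec:
      hard:
        limits.cpu: "120"
        limits.memory: 188Gi
        requests.cpu: "16"
        requests.memory: 72Gi

The manifest can be applied with kubectl apply -f before running the deployment script.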

Security

The security requirements for ArcGIS Enterprise on Kubernetes are described below.

Role-based access control

Role-based access control (RBAC) must be enabled on the Kubernetes cluster. You do not need cluster-admin privileges to deploy ArcGIS Enterprise on Kubernetes; however, the deploying user must have at least administrative privileges in the namespace. You can grant these privileges by assigning the default ClusterRole admin to the user through a RoleBinding in the namespace, as shown below.
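
A minimal sketch of such a RoleBinding (the namespace arcgis and the user name deployer@example.com are placeholders):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: arcgis-namespace-admin
      namespace: arcgis
    subjects:
      - kind: User
        name: deployer@example.com
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      # Default ClusterRole that grants full control within a single namespace
      kind: ClusterRole
      name: admin
      apiGroup: rbac.authorization.k8s.io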

Pod security policy (security context constraints in OpenShift) and virtual memory

ArcGIS Enterprise on Kubernetes deploys Elasticsearch to support various features of the ArcGIS Enterprise organization. By default, Elasticsearch uses the mmapfs directory to store required indices. The default operating system limits on mmap counts may be insufficient for deployment; Elasticsearch recommends a vm.max_map_count value of at least 262144. Changing this value requires elevated (root) privileges on each node.

Depending on whether the Kubernetes cluster includes a pod security policy and allows containers to run as privileged or unprivileged, the following actions are required:

  • If the Kubernetes cluster does not include a pod security policy but allows containers to run as privileged, no action is required.
  • Run as privileged—If the Kubernetes cluster has pod security defined and allows containers to run as privileged, you must allow the Elasticsearch service account to run containers as privileged. Other service accounts do not need to run containers as privileged. ArcGIS Enterprise on Kubernetes can run a privileged init container on the node running Elasticsearch, which changes the vm.max_map_count value. The ArcGIS Enterprise on Kubernetes deployment script creates a service account under its namespace to use API Server authentication for processes inside the pods. The Elasticsearch pod uses its own service account, which is not shared with other workloads. The default Elasticsearch service account is arcgis-elastic-serviceaccount. You can grant the service account access to the pod security policy with RBAC Roles and RoleBindings. For OpenShift, you can grant the service account access to the privileged security context constraints by adding the following in the user section:
    - system:serviceaccount:<Namespace>:arcgis-elastic-serviceaccount
    
  • Run as unprivileged—If the Kubernetes cluster has pod security defined and cannot allow the Elasticsearch service account to run containers as privileged, you must prepare each node manually by running the following command as root:
    sysctl -w vm.max_map_count=262144
    
  • If you have created the PodSecurityPolicy resource, you will need to authorize the following service accounts in the ArcGIS Enterprise namespace.
    • arcgis-admin-serviceaccount
    • arcgis-elastic-serviceaccount
    • arcgis-ingress-serviceaccount
    • arcgis-prometheus-serviceaccount
    • arcgis-queue-serviceaccount
    • default

    ArcGIS Enterprise on Kubernetes containers can run without root privileges. However, the fsGroup and supplementalGroups controls of the PodSecurityPolicy must either use RunAsAny or specify a range that includes the value 117932853, as shown in the following examples.

    supplementalGroups:
        rule: 'RunAsAny'
    fsGroup:
        rule: 'RunAsAny'
    
    supplementalGroups:
      rule: 'MustRunAs'
      ranges:
        # Forbid adding the root group.
        - min: 1
          max: 117932853
    
    fsGroup:
      rule: 'MustRunAs'
      ranges:
        # Forbid adding the root group.
        - min: 1
          max: 117932853
    

If you use Kubernetes NetworkPolicies, ensure that uninterrupted pod-to-pod and pod-to-service communication is allowed in the ArcGIS Enterprise namespace.
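
If you need an explicit rule for this, the following is a minimal sketch of a NetworkPolicy that allows ingress traffic between all pods in the ArcGIS Enterprise namespace (the namespace arcgis is a placeholder; adapt it to your policy model, including any egress rules your cluster requires):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace
      namespace: arcgis
    spec:
      podSelector: {}          # applies to every pod in the namespace
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector: {}  # allow traffic from any pod in the same namespace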

In addition, ensure that the pods in the namespace have access to the Kubernetes API server. The API server is accessible through a service named kubernetes in the default namespace. ArcGIS Enterprise pods use the fully qualified domain name (FQDN) kubernetes.default.svc.cluster.local to query the API server.

Note:

cluster.local is the default domain of the cluster.

Note:

Pods in the cluster must be allowed to run with an fsGroup and supplementalGroups ID of 117932853.

Register a data folder

To publish items using file-based data, such as items published from a file geodatabase, you need to place the data in an NFS shared location. This NFS share must be registered with the ArcGIS Enterprise organization to avoid copying the data to the server during publishing. To register the shared folder successfully, you must grant file-level read permissions to others, as in the example below. You can secure the NFS share at the network or infrastructure level by allowing network access to the pod IP range.
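
A minimal sketch of granting read access to others on the shared directory (the path /nfs/arcgis-data is a placeholder; your NFS export will differ):

    # Grant read (and directory traverse) permissions to others on the shared data
    chmod -R o+rX /nfs/arcgis-data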

Network

Network requirements include an FQDN and a load balancer. Details for each are provided below.

Fully qualified domain name

ArcGIS Enterprise on Kubernetes requires an FQDN (for example, map.company.com). You can use an existing domain name system (DNS) to create one, or use a cloud DNS service such as Amazon Route 53. You can create the DNS record after deployment; however, you must provide its value during deployment. At this release, the FQDN cannot be modified after deployment.

Load balancer

A load balancer is required to direct traffic across each worker node. When using AKS or EKS, you can provision the following load balancers from the deployment script without manual configuration:

  • Azure Load Balancer (public or internal)—A preprovisioned static public IP address and DNS label can be specified in the deployment script.
  • AWS Network Load Balancer (internet-facing or internal)—Other load balancing services can be used; however, they must be configured manually with each cluster node.
    Note:

    The AWS Load Balancer Controller add-on is required to create Network Load Balancers in either a public or private subnet.

In OpenShift Container Platform, routes can be configured to point to the ingress controller service.

You can use a self-managed load balancer pointing to the worker nodes on the ingress controller service's NodePort. For details, see the deployment guide's parameter description for load balancer.

When using a self-managed load balancer or reverse proxy such as NGINX, include the following directive: proxy_set_header X-Forwarded-Host $host;. This header is needed to ensure that traffic is properly routed to your ArcGIS Enterprise organization's URL.
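
For illustration, a minimal sketch of an NGINX server block that passes this header (the server name, certificate paths, and upstream address and port are placeholders; 10.0.0.10:31443 stands in for a worker node and the ingress controller's NodePort):

    server {
        listen 443 ssl;
        server_name map.company.com;

        ssl_certificate     /etc/nginx/tls/map.company.com.crt;
        ssl_certificate_key /etc/nginx/tls/map.company.com.key;

        location / {
            # Preserve the original host so requests resolve to the organization URL
            proxy_set_header X-Forwarded-Host $host;
            proxy_pass https://10.0.0.10:31443;
        }
    }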

IP requirements

Planning your cluster network in advance is essential for ensuring a successful deployment, appropriate scaling requirements, and the ability to upgrade. ArcGIS Enterprise on Kubernetes initially deploys 47-66 pods, depending on the architecture profile. The number of pods will increase as additional capabilities are added, the deployment is scaled, and during the upgrading process.

Each pod is assigned a unique IP address, and depending on the cluster network configuration, pods can either get their IP addresses from a logically different address space from that of the host network (an overlay network) or from the host subnet. For example, if you configure your cluster to use Kubenet in Azure (default), pods will receive an IP address from a logically different address space and will be able to reach Azure resources using NAT.

Kubernetes supports the Container Network Interface (CNI), and platforms such as AKS and EKS use platform-specific CNI plugins for cluster networking. For example, EKS clusters use the Amazon Virtual Private Cloud (VPC) CNI plugin by default. If the cluster is configured with such a CNI plugin, pods receive IP addresses from the host subnet and the corresponding pool of IPs available in the VPC/VNet.

If you do not have a sufficient number of IPs available in the host subnets, the deployment will either fail or you will not be able to scale it. For example, if an EKS cluster is configured with two subnets, each with a /26 IPv4 address prefix (64 available IPv4 addresses each), no more than 126 IP addresses are available for the pods. You may be able to deploy ArcGIS Enterprise on Kubernetes in this cluster, but you will not be able to scale the deployment to 80 feature service pods, as that would exceed the number of available IP addresses.

System storage

ArcGIS Enterprise on Kubernetes requires persistent volumes (PVs) for system storage. These can be provisioned dynamically or statically. When creating PVs of either type, you can use custom (larger) sizes and labels. Stateful workloads of ArcGIS Enterprise include relational database management systems as well as NoSQL databases. It is recommended that you use low-latency block storage devices such as EBS volumes, Azure Disks, or vSphereVolume.

Because these persistent volumes store data and settings, they should be protected using restrictive security policies. For persistent volumes based on file-based storage, such as NFS, Azure File, or Gluster, ensure that the permissions to the directories are set to prevent unauthorized access. For block storage, such as EBS volumes, Azure Disk, and iSCSI, ensure that the block devices are limited to only those users needing access.

The following are descriptions of storage volumes and their intended purpose:

Note:

Persistent volume requirements are stated for 11.0 and may differ from prior versions.

  • In-memory—Stores temporary system resources.
  • Item packages—Stores large uploads and packages to support publishing workflows.
  • Object—Stores uploaded and saved content, hosted tile and image layer caches, and geoprocessing output. Four are required for deployment.
  • Queue—Stores asynchronous geoprocessing jobs.
  • Relational—Stores hosted feature data and administrative aspects such as customization and configuration settings. Two are required for deployment.
  • Spatiotemporal and index—Stores logs and indexes as well as hosted feature data.
  • Usage metric data—Stores GIS service usage data.

Consider the storage requirements for your organization and define the size for each PV accordingly.

Static PVs

If you're provisioning static PVs prior to deployment, the specifications and labels described below are recommended.

The number of PVs required for each architecture profile are provided.

| Volume | Development profile | Standard availability profile | Enhanced availability profile |
|---|---|---|---|
| in-memory-volume | 1 | 1 | 1 |
| item-packages-volume | 1 | 2 | 2 |
| object-volume | 1 | 3 | 8 |
| queue-volume | 1 | 2 | 2 |
| relational-volume | 2 | 2 | 2 |
| spatiotemporal-and-index-volume | 1 | 3 | 5 |
| usage-metric-volume | 1 | 1 | 1 |

When configuring an organization with the setup wizard, specifications such as the following (volume name, size, app, and tier) can be used for volume binding; however, you can customize them as needed:

| Volume | Size in GiB (minimum) | Access mode | Label |
|---|---|---|---|
| in-memory-volume | 16 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=ignite |
| item-packages-volume | 16 | ReadWriteOnce | arcgis/tier=api, arcgis/app=sharing |
| object-volume | 32 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=ozone |
| queue-volume | 16 | ReadWriteOnce | arcgis/tier=queue, arcgis/app=rabbitmq |
| relational-volume | 16 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=postgres |
| spatiotemporal-and-index-volume | 16 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=elasticsearch |
| usage-metric-volume | 30 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=prometheus |
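
As an illustration of how these specifications map onto a static PV definition, the following is a minimal sketch of an item-packages-volume PV backed by NFS (the PV name, server, and export path are placeholders; substitute the storage backend you actually use and match the size, access mode, and labels above):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: item-packages-volume-1
      labels:
        arcgis/tier: api
        arcgis/app: sharing
    spec:
      capacity:
        storage: 16Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      nfs:
        server: nfs.example.com
        path: /exports/arcgis/item-packages-1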

Additional considerations for static PVs

The type of provisioned storage that you configure during deployment will determine requirements for upgrades and scaling:

  • Dynamic PVs—Storage is scaled and adjusted by the software, provided that sufficient storage is available, and storage-class specifications are met.
  • Static PVs—If you are provisioning static PVs for your deployment, you will need to provision additional item-packages PVs with the same specifications (labels, size, and access mode) as those specified during deployment to support scaling and upgrade workflows.

Adjust static PVs to scale the Portal API deployment

To scale the Portal API deployment (to increase the number of participating pods), an additional item-packages PV is required for each additional pod that is added to the deployment. For example, if the organization requires three additional pods for the Portal API deployment, a minimum of three additional item-packages PVs must be provisioned and configured with equivalent specifications to those specified during deployment.

Dynamic PVs

For dynamic provisioning, a StorageClass is required.

The reclaimPolicy parameter on the StorageClass must be set to Retain.

Note:
Not all VM types support premium disks in Azure. Use a premium disk when the VM type supports it.
  • For AKS, the following is an example of a StorageClass definition with Premium Azure Disk:
    kind: StorageClass 
    apiVersion: storage.k8s.io/v1 
    metadata: 
      name: arcgis-storage-default 
    provisioner: kubernetes.io/azure-disk 
    parameters: 
      kind: Managed 
      storageaccounttype: Premium_LRS 
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    
  • For EKS, the following is an example of a StorageClass definition with GP2 type EBS volumes:
    kind: StorageClass 
    apiVersion: storage.k8s.io/v1 
    metadata: 
      name: arcgis-storage-default  
    provisioner: kubernetes.io/aws-ebs 
    parameters: 
      fsType: ext4 
      type: gp2 
    reclaimPolicy: Retain
    allowVolumeExpansion: true
    volumeBindingMode: WaitForFirstConsumer
    

You can also use the default storage classes provided with an AKS or EKS cluster. In AKS, these are the default (Azure Disk) and managed-premium storage classes. In EKS, this is the gp2 storage class.

Client workstation

The deployment scripts are bash scripts that can be run from a remote client workstation.

Note:

The client workstation used must be in accordance with supported environments. Linux emulators are not supported to deploy ArcGIS Enterprise on Kubernetes.

You need the following when setting up your client workstation (download links are provided):

  • Kubectl
  • An environment-specific command line interface (CLI)

Kubectl is a prerequisite to run the deployment script. Use Kubectl installation and setup to download the Kubernetes command line tool.
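
To confirm the workstation is ready before running the deployment script, a quick check such as the following can be used (the target cluster context must already be configured):

    # Confirm kubectl is installed and can reach the target cluster
    kubectl version --client
    kubectl config current-context
    kubectl get nodes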

When managing your deployment, you can use environment-specific command line tools, such as the Azure CLI for AKS, the AWS CLI for EKS, the Google Cloud CLI for GKE, and the OpenShift CLI (oc) for Red Hat OpenShift.

TLS certificate

ArcGIS Enterprise on Kubernetes uses an NGINX-based ingress controller. This ingress controller is namespace scoped and is deployed to listen only to ingress traffic for the ArcGIS Enterprise namespace. A TLS certificate is required with the FQDN in the certificate common name and subject alternative name. Either a CA-signed certificate or a self-signed certificate can be used; however, for security reasons, a CA-signed certificate is recommended. This is the default TLS certificate for the ingress controller. The following certificate options are available in the deployment script to apply a TLS certificate for ingress traffic:

  • An existing TLS secret that contains a private key and certificate
  • A .pfx file that contains a private key and certificate
  • A PEM-formatted private key and certificate
  • A self-signed certificate

ArcGIS Enterprise on Kubernetes supports using a TLS certificate for the ingress controller that is issued and managed by Kubernetes cert-manager. This certificate must be stored in a TLS secret in the same namespace as ArcGIS Enterprise. The TLS secret can then be referenced either during deployment or after the ArcGIS Enterprise organization is created.
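
For example, a minimal sketch of creating such a TLS secret from a PEM-formatted certificate and key with kubectl (the secret name, file names, and namespace are placeholders):

    kubectl create secret tls arcgis-ingress-tls \
      --cert=map-company-com.crt \
      --key=map-company-com.key \
      -n arcgis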

ArcGIS Pro

  • ArcGIS Pro 3.0 is the companion release for ArcGIS Enterprise on Kubernetes 11.0. To benefit from the latest features available, use ArcGIS Pro 3.0.
  • To publish services to ArcGIS Enterprise on Kubernetes, ArcGIS Pro 2.8 or later is required.
  • To consume services from ArcGIS Enterprise on Kubernetes, ArcGIS Pro 2.7 or later is required.

When registering a data store item from an enterprise geodatabase, the geodatabase version must be 10.9.0.2.8 or later.

Note:
To benefit from the latest features available, upgrade your geodatabase version to 11.0.0.3.0.
The geodatabase version number is a combination of ArcGIS Enterprise and ArcGIS Pro release numbers. For more information, review client and geodatabase compatibility.

Upgrade and update requirements

Before performing an upgrade, you must meet several requirements, such as the following:

  • If a required update is available, you must apply it before upgrading to this release. Review the release notes for details about the latest required update.
  • You must have a unified ArcGIS Enterprise on Kubernetes license for this release.
  • You must update the resource quota values in your namespace to meet the current requirements.
  • If you have provisioned static PVs, you must provision additional storage to accommodate upgrade requirements. See the Adjust static PVs for upgrades section for details.
  • If your provisioned storage uses dynamic PVs, ensure that sufficient storage is available for the additional item-packages, object, and queue volumes.
  • If you are upgrading from version 10.9.1, you must provision at least 50 percent additional storage to accommodate new object store PVs for each architecture profile. For example, if you have allocated 100 GB of storage for the object store PV per object store pod, you must provision at least 150 GB of additional storage.
  • Run the preupgrade script. This script detects and addresses any functional requirements of the current software release.
  • If you have configured a web adaptor with your organization, review installation and upgrade requirements.
  • If your organization is in a disconnected environment, follow steps to apply an upgrade or update in disconnected environments.
  • If you used your organization's container registry when deploying ArcGIS Enterprise on Kubernetes, you must copy the required container images from the Esri repository to your organization's registry before running the update or upgrade.

Adjust static PVs for upgrades

Prior to an upgrade, each pod in the organization's Portal API and Queue Store deployments is configured with an item-packages or queue-volume PV, respectively. In preparation for an upgrade, each pod in these deployments must be configured with an additional PV.

For example, if the Queue Store or Portal API deployment is configured with three running pods prior to an upgrade, three additional PVs must be provisioned and configured with specifications equivalent to those specified during deployment.

Once the upgrade is complete, the Portal API or Queue Store deployment uses the newly provisioned persistent volumes and persistent volume claims, and the original set can be removed.

Additional static PVs will need to be provisioned for Object Store deployments when upgrading according to the table below:

| Deployment type | Default static object-volume PVs | Additional static PVs required to upgrade |
|---|---|---|
| Development profile | 1 | Create 1 additional PV |
| Standard availability profile | 3 | Create 3 additional PVs |
| Enhanced availability profile | 8 | Create 8 additional PVs |