The minimum hardware and infrastructure required to run ArcGIS Enterprise on Kubernetes 11.4 are described below. These requirements also apply when deploying in a disconnected environment.
Supported environments
System requirements and specifications apply across all supported environments except where noted. For this release, the following environments are supported:
- Red Hat OpenShift Container Platform (RHOS)
- Amazon Elastic Kubernetes Service (EKS)
- Google Kubernetes Engine (GKE)
- Microsoft Azure Kubernetes Service (AKS)
- Rancher Kubernetes Engine (RKE and RKE2)
It is recommended that you disable auto-upgrade in your Kubernetes cluster. When auto-upgrade is enabled, the nodes are automatically updated to the latest version of Kubernetes, and these future versions may not yet be supported for ArcGIS Enterprise.
Caution:
To ensure full compatibility and support, confirm that your Kubernetes version falls within the supported range before upgrading ArcGIS Enterprise on Kubernetes.
The following versions of each environment have been tested and are supported:
Supported environment | Supported Kubernetes version |
---|---|
Managed Kubernetes services on the cloud (AKS, EKS, GKE) | 1.29 - 1.30 |
Red Hat OpenShift Container Platform (including ROSA and ARO) | 4.15 - 4.16 |
RKE and RKE2 | 1.29 - 1.30 |
Note:
ArcGIS Enterprise on Kubernetes is only supported on CPUs that adhere to the x86_64 architecture (64 bit). Worker nodes must be Linux based.
Container registry
Container images for ArcGIS Enterprise on Kubernetes are accessible from a private Docker Hub organization. Esri will provide access to this organization's repositories to those who are deploying ArcGIS Enterprise on Kubernetes. When deploying in a disconnected environment, you will need to push images from the private Docker Hub organization to your own private container registry that is accessible from your cluster.
Obtain an Esri license
To authorize your ArcGIS Enterprise organization during deployment, you need an ArcGIS Enterprise on Kubernetes license file in JSON format (.json file). To obtain this license file, visit My Esri with privileges to take licensing action.
Worker nodes
ArcGIS Enterprise on Kubernetes is deployed with one of three architecture profiles. Recommendations for resource (CPU and memory) requests and limits and overall compute requirements vary based on the selected profile. Recommendations for each profile are provided below.
Note:
The following are the minimum node requirements to install the software. Requirements vary based on the selected architecture profile, and additional worker nodes are required to support scaling workloads and enabling capabilities in your organization. It is recommended that each worker/agent node have a minimum of 8 CPU and 32 GiB of memory. To accommodate the download of the container images associated with ArcGIS Enterprise on Kubernetes, it is also recommended that you have a minimum root disk size of 100 GiB.
Architecture profile | Minimum worker/agent nodes | Total minimum CPU | Total minimum memory (GiB) |
---|---|---|---|
Enhanced availability | 5 | 40 | 160 |
Standard availability | 4 | 32 | 128 |
Development | 3 | 24 | 96 |
The pods in the ArcGIS Enterprise on Kubernetes deployment are distributed across the worker nodes in the cluster. The worker node clocks must be synchronized to a common source so that times are consistent within the cluster. When scaling the deployment or adding another ArcGIS Enterprise deployment to the cluster, you need to provision hardware accordingly. This may require an increase in the default maximum number of pods per node. The number of pods that are initially created varies with each architecture profile. As you scale horizontally or add functionality, the number of pods increases.
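As one illustration of provisioning for pod density, on EKS the default maximum number of pods per node depends on the instance type and can be raised when creating a node group. The following is a minimal sketch using an eksctl configuration file; the cluster name, region, and node group name are hypothetical, and values should be adjusted to your environment:

```yaml
# eksctl-nodegroup.yaml -- hypothetical names; a sketch, not a definitive configuration
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: arcgis-cluster        # assumed cluster name
  region: us-east-1           # assumed region
managedNodeGroups:
  - name: arcgis-nodes
    instanceType: m5.2xlarge  # 8 vCPU / 32 GiB, matching the per-node minimum above
    desiredCapacity: 5        # enhanced availability profile
    volumeSize: 100           # GiB root disk, per the recommendation above
    maxPodsPerNode: 110       # raise the per-node pod limit for scaling headroom;
                              # exceeding the instance's ENI-based limit may also
                              # require VPC CNI prefix delegation
```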
Security
The security requirements for ArcGIS Enterprise on Kubernetes are described below.
Role-based access control
Role-based access control (RBAC) must be enabled on the Kubernetes cluster. Cluster-admin privileges are not required to deploy ArcGIS Enterprise on Kubernetes; at minimum, the deploying user must have administrative privileges in the namespace. You can assign the user a default ClusterRole by creating a RoleBinding in the namespace. For more information, review the RBAC role resource.
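For example, a RoleBinding like the following grants a user administrative privileges within the ArcGIS Enterprise namespace by referencing the default admin ClusterRole. This is a minimal sketch; the namespace and user names are hypothetical:

```yaml
# rolebinding.yaml -- namespace and user names are hypothetical
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: arcgis-namespace-admin
  namespace: arcgis            # the ArcGIS Enterprise namespace
subjects:
  - kind: User
    name: arcgis-deployer      # the user running the deployment script
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                  # default ClusterRole with namespace admin privileges
  apiGroup: rbac.authorization.k8s.io
```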
Register a data folder
To publish items using file-based data, such as items published from a file geodatabase on a network share, collections of cache tiles, or locators, you can host the data in an NFS shared location or add a PV-based folder data store. To ensure that GIS and system services can access the shared location, the directory permissions must provide read access to the share, its subdirectories, and the files they contain, either through the user and group ID of the running service pods or through the permissions for others. Registering this folder data store with the organization avoids the need to copy data to the organization during publishing. For NFS servers, you can manage security at the network or infrastructure level by allowing network access to the export only from the pod IP range.
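As an illustration, a statically provisioned PV backed by an NFS share might look like the following sketch; the server address, export path, and capacity are hypothetical:

```yaml
# nfs-folder-pv.yaml -- server, path, and sizing are hypothetical
apiVersion: v1
kind: PersistentVolume
metadata:
  name: arcgis-folder-data
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadOnlyMany             # publishing workflows only need read access to the share
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com    # assumed NFS server reachable from the pod network
    path: /exports/gisdata     # assumed export; directory permissions must allow read
```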
Network
Network requirements include a fully qualified domain name (FQDN) and a load balancer or reverse proxy that directs traffic from clients over the standard HTTPS port (443) to the configured back-end targets. Details for each are provided below.
Fully qualified domain name
ArcGIS Enterprise on Kubernetes requires a FQDN (for example, map.company.com). You can use an existing domain name system (DNS) service to create a CNAME or A record, or integrate with a cloud provider's DNS service such as Amazon Route 53. You can create the DNS record after deployment; however, the FQDN must be provided during the deployment process. At this release, the FQDN cannot be modified after deployment.
Load balancer
A load balancer can be used to direct traffic to your ArcGIS Enterprise on Kubernetes deployment. Both layer 4 and layer 7 load balancers can be provisioned from the deployment script without manual configuration. Load balancers integrated with your deployment through the deployment script route traffic directly to the in-cluster NGINX ingress controller pod.
Alternatively, you can use ArcGIS Enterprise on Kubernetes Web Adaptor to route traffic to your deployment, which requires that incoming traffic be sent to the deployment's worker nodes on a specific port.
The following layer 4 load balancers can be provisioned from the deployment script without manual configuration:
- Azure Load Balancer (public or internal)—A preprovisioned static public IP address and DNS label can be specified in the deployment script.
- AWS Network Load Balancer (internet-facing or internal)—Custom annotations can be used to customize the load balancer when deploying silently. See the custom ingress annotations section of additional silent deployment properties for more information.
Note:
The AWS Load Balancer Controller add-on is required to create Network Load Balancers in either a public or private subnet.
- Google Cloud Platform TCP Load Balancer (internet-facing or internal)—A preprovisioned static public IP address can be specified in the deployment script.
- Generic load balancer—Supports layer 4 solutions such as MetalLB, Traefik, and HAProxy that an organization may already operate in its on-premises infrastructure. When using a managed cloud provider's Kubernetes cluster, choose the corresponding integrated cloud option; for self-managed clusters, the generic load balancer option appends any annotations defined in the deployment properties file as custom annotations.
The load balancers specified above can be customized using annotations on the underlying Kubernetes Service object. For example, you can enable cross-zone load balancing on a Network Load Balancer in AWS or place an Azure Load Balancer into a specific resource group. The deploy.properties template file included with the deployment script contains examples of how these annotations can be specified during silent deployment.
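Conceptually, the annotation lands on the Service object that fronts the deployment. The following sketch shows the cross-zone annotation recognized by the AWS Load Balancer Controller; the Service name is hypothetical, and the deployment script renders the actual object:

```yaml
# Illustrative Service metadata only; the deployment script renders the full object
apiVersion: v1
kind: Service
metadata:
  name: arcgis-ingress-nginx   # hypothetical name
  annotations:
    # AWS Load Balancer Controller annotation enabling cross-zone load balancing
    service.beta.kubernetes.io/aws-load-balancer-attributes: load_balancing.cross_zone.enabled=true
spec:
  type: LoadBalancer
  ports:
    - port: 443
      targetPort: 443
```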
The deployment script can be used to employ layer 7 load balancing capabilities such as a web application firewall or to meet organization requirements for ingress to the deployed application using a pre-existing ingress controller. The following layer 7 load balancers can be deployed or integrated directly from the deployment script:
- AWS Application Load Balancer
Note:
The AWS Load Balancer Controller add-on is required to create Application Load Balancers in either a public or private subnet.
- Azure Application Gateway
- Google Cloud Platform Application Load Balancer
Red Hat OpenShift provides Routes, an integrated construct that directs traffic from external clients to services within the cluster. This option requires an administrator to create the Route outside of the deployment script, as the properties may vary based on organizational requirements. The Route object can be created from the console or from a YAML file and should use end-to-end TLS encryption, in either passthrough or re-encryption mode, to proxy traffic to the bundled ingress controller.
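The following is a minimal sketch of such a Route using re-encryption mode; the Route name, namespace, host, and the name of the in-cluster ingress controller Service are all hypothetical and depend on your deployment:

```yaml
# route.yaml -- names and host are hypothetical
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: arcgis-route
  namespace: arcgis              # the ArcGIS Enterprise namespace
spec:
  host: map.company.com          # the organization's FQDN
  to:
    kind: Service
    name: arcgis-ingress-nginx   # hypothetical in-cluster ingress controller Service
  port:
    targetPort: https
  tls:
    termination: reencrypt       # end-to-end TLS; passthrough is also an option
```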
A cluster-level ingress controller uses these load balancers to apply cluster-level Ingress rules that route incoming traffic to an ArcGIS Enterprise on Kubernetes deployment. To implement these load balancers from the deployment script prior to silently deploying, modify a template YAML file in the layer-7-templates folder, save it to your client workstation, and specify this location for the CLUSTER_INGRESS_CONTROLLER_YAML_FILENAME parameter. See Cluster-level ingress controllers for more information.
Note:
To support Notebook services, any external reverse proxies or load balancers must be configured with timeout settings that keep sessions open for 10 minutes.
When using a self-managed load balancer or reverse proxy such as NGINX, the X-Forwarded-Host header must be set to the Host header value of the client-facing URL to ensure that traffic is properly routed to your ArcGIS Enterprise organization's URL. In NGINX, this can be achieved using the following directive: proxy_set_header X-Forwarded-Host $host;.
Note:
ArcGIS Enterprise does not support SSL offloading through a reverse proxy/load balancer. If your configuration uses a reverse proxy, it must redirect to either the ArcGIS Web Adaptor or directly to the organization over HTTPS.
IP requirements
Planning your cluster network in advance is essential for ensuring a successful deployment, appropriate scaling requirements, and the ability to upgrade. ArcGIS Enterprise on Kubernetes initially deploys 47-66 pods, depending on the architecture profile. The number of pods will increase as additional capabilities are added, the deployment is scaled, and during the upgrading process.
Each pod is assigned a unique IP address, and depending on the cluster network configuration, pods can either get their IP addresses from a logically different address space from that of the host network (an overlay network) or from the host subnet. For example, if you configure your cluster to use Kubenet in Azure (default), pods will receive an IP address from a logically different address space and will be able to reach Azure resources using NAT.
Kubernetes supports Container Network Interface (CNI) plug-ins, and platforms such as AKS and EKS use platform-specific CNI plug-ins for cluster networking. For example, EKS clusters use the Virtual Private Cloud (VPC) CNI by default. If the cluster is configured with such a CNI plug-in, pods receive IP addresses from the host subnet, drawing from the corresponding pool of IPs available in the VPC/VNet.
If you do not have a sufficient number of IPs available in the host subnets, the deployment will either fail or you will not be able to scale it. For example, if an EKS cluster is configured with two subnets, each with a /26 IPv4 address prefix (64 IPv4 addresses each), there cannot be more than 126 IP addresses available for the pods. While you may be able to deploy ArcGIS Enterprise on Kubernetes in this cluster, you will not be able to scale the deployment to 80 feature service pods, as this would exceed the number of available IP addresses.
System storage
ArcGIS Enterprise on Kubernetes requires persistent volumes (PVs) for system storage, which can be provisioned dynamically through a storage class or statically by an administrator prior to creating the organization. Learn more about static provisioning and dynamic provisioning.
Stateful workloads of ArcGIS Enterprise include relational database management systems and NoSQL databases. It is recommended that you provision PVs on block storage devices that provide low latency, such as EBS volumes when using EKS, Azure Disks when using AKS, Persistent Disks when using GKE, and vSphereVolume or Longhorn volumes when deploying to self-managed clusters.
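For example, on EKS a storage class for dynamically provisioned gp3 EBS volumes might look like the following sketch; the class name is hypothetical, and the EBS CSI driver must already be installed in the cluster:

```yaml
# storageclass.yaml -- class name is hypothetical; requires the EBS CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: arcgis-block-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3                      # low-latency block storage for stateful workloads
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain            # keep organization data if a claim is deleted
allowVolumeExpansion: true
```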
Because these PVs store your organization's data and settings, you must protect them using restrictive security policies. For PVs based on network file storage, such as NFS, Azure Files, and GlusterFS, ensure that the permissions are set to prevent unauthorized access. For block storage, such as EBS, Azure Disk, vSphereVolume, and Longhorn volumes, ensure that access to the storage volumes is restricted to only those users who need it.
The following are descriptions of storage volumes and their intended purpose:
Note:
Persistent volume requirements are stated for 11.4 and may differ from prior versions.
- In-memory—Stores temporary system resources.
- Item packages—Stores large uploads and packages to support publishing workflows.
- Object—Stores uploaded and saved content, hosted tile, image, and scene layer caches, and geoprocessing output.
- Queue—Stores asynchronous geoprocessing jobs.
- Relational—Stores hosted feature data and administrative aspects such as customization and configuration settings. Two are required for deployment.
- Spatiotemporal and index—Stores logs and indexes as well as hosted feature data.
Note:
Spatiotemporal-hosted feature layers are not supported at this release.
- Usage metric data—Stores GIS service usage data.
Consider the storage requirements for your organization and define the size for each PV accordingly.
Client workstation
The deployment scripts are bash scripts that can be run from a remote client workstation. The user running the scripts must have read and write access to the script directory so the scripts can write temporary resource files to its subdirectories.
Note:
Due to known compatibility issues, Linux emulators are not supported to deploy ArcGIS Enterprise on Kubernetes.
The following operating systems have been tested and are supported to run the deployment script, configure script, and other packaged scripts and tools:
- Red Hat Enterprise Linux Server 9
- Red Hat Enterprise Linux Server 8
- AlmaLinux 9
- SUSE Linux Enterprise Server 15
- Ubuntu Server 24.04 LTS
- Ubuntu Server 22.04 LTS
- Oracle Linux 9
- Oracle Linux 8
- Rocky Linux 9
- Rocky Linux 8
While other untested operating systems may work, unforeseen issues may arise when deploying and configuring the organization. If you encounter an issue with an operating system that is not listed above, it is recommended that you provision a client workstation that matches one of the tested operating systems.
You need the following when setting up your client workstation (download links are provided):
- Kubectl
- An environment-specific command line interface (CLI)
Kubectl is a prerequisite to run the deployment script. Use Kubectl installation and setup to download the Kubernetes command line tool.
Note:
The kubectl client version must be within one minor release of the Kubernetes API server version. For example, kubectl 1.29 is compatible with Kubernetes cluster versions 1.29-1.30.
When managing your deployment, you can use environment-specific command line tools; download the CLI for your environment from its provider.
TLS certificate
ArcGIS Enterprise on Kubernetes uses an NGINX-based ingress controller. This ingress controller is namespace scoped and is deployed to listen only for ingress traffic to the ArcGIS Enterprise namespace. A Transport Layer Security (TLS) certificate is required with the FQDN in the certificate common name and subject alternative name. Either a CA-signed certificate or a self-signed certificate can be used; however, for security reasons, a CA-signed certificate is recommended. This is the default TLS certificate for the ingress controller. The following certificate options are available in the deployment script to apply a TLS certificate for ingress traffic:
- An existing TLS secret that contains a private key and certificate
- A .pfx file that contains a private key and certificate
- A PEM-formatted private key and certificate
- A self-signed certificate
ArcGIS Enterprise on Kubernetes supports using a TLS certificate for the ingress controller that is issued and managed by Kubernetes cert-manager. This certificate must be stored in a TLS secret in the same namespace as ArcGIS Enterprise. The TLS secret can then be referenced either during deployment or after the ArcGIS Enterprise organization is created.
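The following is a minimal sketch of a cert-manager Certificate that stores the issued certificate in a TLS secret in the ArcGIS Enterprise namespace; the issuer, secret, and namespace names are hypothetical, and the issuer must be configured separately:

```yaml
# certificate.yaml -- issuer, secret, and namespace names are hypothetical
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: arcgis-ingress-tls
  namespace: arcgis                        # same namespace as ArcGIS Enterprise
spec:
  secretName: arcgis-ingress-tls-secret    # TLS secret referenced during deployment
  dnsNames:
    - map.company.com                      # FQDN placed in the subject alternative name
  issuerRef:
    name: letsencrypt-prod                 # assumed ClusterIssuer configured separately
    kind: ClusterIssuer
```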
ArcGIS Pro
- ArcGIS Pro 3.4 is the companion release for ArcGIS Enterprise on Kubernetes 11.4. To benefit from the latest features available, use ArcGIS Pro 3.4.
- To publish services to ArcGIS Enterprise on Kubernetes, ArcGIS Pro 2.8 or later is required.
- To consume services from ArcGIS Enterprise on Kubernetes, ArcGIS Pro 2.7 or later is required.
When registering a data store item from an enterprise geodatabase, the geodatabase version must be 10.9.0.2.8 or later.
Note:
To benefit from the latest features available, upgrade your geodatabase version to 11.4.0. For more information, review Client and geodatabase compatibility.