The minimum hardware and infrastructure required to run ArcGIS Enterprise on Kubernetes at 10.9 are described below.
Supported environments
System requirements and specifications apply across all supported environments except where noted. For this release, the following environments are supported:
- On-premises data center
  - Red Hat OpenShift Container Platform 4.6 or later
- Managed Kubernetes services on the cloud
  - Microsoft Azure Kubernetes Service (AKS)
  - Amazon Elastic Kubernetes Service (EKS)
Container registry
Container images for ArcGIS Enterprise on Kubernetes are accessible from a private Docker Hub repository. Esri provides access to this repository to those who are deploying ArcGIS Enterprise on Kubernetes; contact your Esri representative for details.
Esri licenses
To authorize your ArcGIS Enterprise organization during deployment, you need a user type license file in JSON format (.json file) and a server license file in ECP or PRVC format (.ecp or .prvc file). To obtain these license files, visit My Esri; you must have privileges to take licensing action.
- Sign in to My Esri.
- Select My Organizations > Licensing.
- Under License Esri Products, click Start.
- For Product, select ArcGIS Enterprise; for Version, select the desired ArcGIS Enterprise version. From the License type list, proceed through the steps to generate license files for ArcGIS Server and Portal for ArcGIS, including your server roles, user types, and applications, as applicable.
Kubernetes cluster
To deploy ArcGIS Enterprise on Kubernetes, you must have a Kubernetes cluster on one of the platforms mentioned above. For each supported environment, the Kubernetes cluster must be version 1.19 or later.
Namespace
Each deployment of ArcGIS Enterprise on Kubernetes requires its own dedicated namespace, and the namespace must be created before running the deployment script.
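A minimal namespace manifest is shown below for reference; the name arcgis-enterprise is an assumption and can be any name that suits your environment.
apiVersion: v1
kind: Namespace
metadata:
  name: arcgis-enterprise   # placeholder; use your own dedicated namespace name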
Compute
ArcGIS Enterprise on Kubernetes is deployed with one of three architecture profiles. Recommendations for resource (CPU and memory) requests and limits and overall compute requirements vary based on the selected profile. Recommendations for each profile are provided below.
The following are the minimum node requirements for each architecture profile. It is recommended that each worker/agent node have a minimum of 8 CPU and 32 GiB of memory.
Architecture profile | Minimum worker/agent nodes | Total minimum CPU | Total minimum memory (GiB) |
---|---|---|---|
Standard availability | 3 | 24 | 96 |
Enhanced availability | 4 | 32 | 128 |
Development | 2 | 16 | 64 |
Note:
ArcGIS Enterprise on Kubernetes is only supported on CPUs that adhere to the x86_64 architecture (64 bit).
The pods in the ArcGIS Enterprise on Kubernetes deployment are distributed across the worker nodes in the cluster. When scaling the deployment or adding another ArcGIS Enterprise deployment to the cluster, you need to provision hardware accordingly. This may require an increase in the default maximum number of pods per node. The number of pods that are initially created varies with each architecture profile. As you scale horizontally or add new functionality, the number of pods increases.
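On self-managed nodes, one way the per-node pod limit can be raised is through the kubelet configuration; the following is a minimal sketch only, the maxPods value of 120 is an assumption, and on managed services such as AKS and EKS the limit is typically set when the node pool or node group is created.
# Minimal kubelet configuration sketch; supplied to the kubelet with --config.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 120   # illustrative value; size this to your scaling plans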
Resource quota
ArcGIS Enterprise on Kubernetes pods have defined requests and limits for CPU and memory. If the namespace has a ResourceQuota object, the quota must be higher than the sum of all the pods' requests and limits. These values vary based on the architecture profile you've selected, as described below.
It is recommended that you set aside at least 10% of request resources for proper functioning of the cluster nodes.
The following quota recommendations for each profile are based on the set-asides described above. The limit values shown are placeholders and must be configured based on your actual scalability requirements.
Standard availability profile:
spec:
  hard:
    limits.cpu: "120"
    limits.memory: 196Gi
    requests.cpu: "22"
    requests.memory: 86Gi
Enhanced availability profile:
spec:
  hard:
    limits.cpu: "132"
    limits.memory: 256Gi
    requests.cpu: "28"
    requests.memory: 108Gi
Development profile:
spec:
  hard:
    limits.cpu: "86"
    limits.memory: 154Gi
    requests.cpu: "14"
    requests.memory: 58Gi
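The blocks above are fragments of a ResourceQuota object. As a minimal sketch, a complete manifest for the standard availability profile might look like the following; the object name and namespace are assumptions.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: arcgis-quota             # placeholder object name
  namespace: arcgis-enterprise   # assumed ArcGIS Enterprise namespace
spec:
  hard:
    limits.cpu: "120"
    limits.memory: 196Gi
    requests.cpu: "22"
    requests.memory: 86Gi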
Security
The security requirements for ArcGIS Enterprise on Kubernetes are described below.
Role-based access control
Role-based access control (RBAC) must be enabled on the Kubernetes cluster. You do not need cluster-admin privileges to deploy ArcGIS Enterprise on Kubernetes; however, the deploying user must at minimum have administrative privileges in the namespace. You can grant these privileges by creating a RoleBinding in the namespace that assigns the default ClusterRole admin to the user.
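A minimal RoleBinding sketch follows; the namespace arcgis-enterprise and user arcgis-deployer are placeholders for your own values.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: arcgis-namespace-admin   # placeholder name
  namespace: arcgis-enterprise   # assumed ArcGIS Enterprise namespace
subjects:
  - kind: User
    name: arcgis-deployer        # placeholder deploying user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                    # default Kubernetes ClusterRole
  apiGroup: rbac.authorization.k8s.io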
Pod security policy (security context constraints in OpenShift) and virtual memory
ArcGIS Enterprise on Kubernetes deploys Elasticsearch to support various features of the ArcGIS Enterprise organization. By default, Elasticsearch uses an mmapfs directory to store the required indices. The default operating system limits on mmap counts may be insufficient for deployment; the recommended vm.max_map_count value is 262144. Changing this value requires elevated (root) privileges on each node.
Depending on whether the Kubernetes cluster allows for containers to run as privileged or unprivileged, the following actions are required.
- Run as privileged—ArcGIS Enterprise on Kubernetes runs a privileged init container on the node running Elasticsearch and no additional action is needed.
- Run as unprivileged—If the Kubernetes cluster has pod security defined and does not allow for containers to run as privileged, the following options apply:
- Option 1—The ArcGIS Enterprise on Kubernetes deployment script creates a service account under its namespace to run containers. The default service account is arcgis-admin-serviceaccount. If the cluster includes a pod security policy, you must allow the service account to run containers as privileged. For OpenShift, you can grant this service account access to the privileged security context constraints by adding the following entry to the users section:
  - system:serviceaccount:<Namespace>:arcgis-admin-serviceaccount
- Option 2—If you cannot grant the service account to run as a privileged container, you must prepare each node manually by running the following command as root:
sysctl -w vm.max_map_count=262144
After preparing the nodes, you must instruct the deployment script not to run privileged containers as follows:
- Navigate to the /setup/.install/arcgis-enterprise/arcgis-enterprise.properties file.
- Update the "ALLOWED_PRIVILEGED_CONTAINERS=${ALLOWED_PRIVILEGED_CONTAINERS:-true}" string to "ALLOWED_PRIVILEGED_CONTAINERS=${ALLOWED_PRIVILEGED_CONTAINERS:-false}".
- Save the file.
The deployment script will not run the init container as privileged.
If you use Kubernetes NetworkPolicies, ensure that uninterrupted pod-to-pod and pod-to-service communication is allowed in the ArcGIS Enterprise namespace.
In addition, ensure that the pods in the namespace have access to the Kubernetes API server. The API server is accessible through a service named kubernetes in the default namespace; pods use the kubernetes.default.svc hostname to query the API server.
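If NetworkPolicies are in use, one hedged starting point is a policy that allows all ingress between pods in the ArcGIS Enterprise namespace; the policy name and namespace below are assumptions, and your security requirements may call for something narrower.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace    # placeholder name
  namespace: arcgis-enterprise   # assumed ArcGIS Enterprise namespace
spec:
  podSelector: {}                # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}        # allow traffic from any pod in the same namespace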
Network
Network requirements include a fully qualified domain name and load balancer. Details are provided for each.
Fully qualified domain name
ArcGIS Enterprise on Kubernetes requires a fully qualified domain name (FQDN), for example, map.company.com. You can use your existing domain name system (DNS) to create one or use a cloud DNS service such as Amazon Route 53. You can create the DNS record after deployment; however, you must provide its value during deployment. At this release, the FQDN cannot be modified after deployment.
Load balancer
A load balancer is required to direct traffic across each worker node. When using AKS or EKS, you can provision the following load balancers from the deployment script without any manual configuration:
- Azure Load Balancer (public or internal)—A preprovisioned static public IP address and DNS label can be specified in the deployment script.
- AWS Network and Classic Load Balancer (internet-facing or internal)—Other load balancing services can be used; however, they must be configured manually with each cluster node.
On OpenShift Container Platform, routes can be configured to point to the ingress controller service.
You can use a self-managed load balancer pointing to the worker nodes on the ingress controller service's NodePort. For details, see the deployment guide's parameter description for load balancer.
Note:
Azure CNI plugin on AKS is not supported at this release.
Storage
ArcGIS Enterprise on Kubernetes requires persistent volumes (PVs) for system storage. They can be provisioned dynamically or statically. When creating PVs of either type, you can use custom (larger) sizes and labels. Stateful workloads of ArcGIS Enterprise include relational database management systems as well as NoSQL databases, so block storage devices that provide low latency, such as EBS volumes, Azure Disks, and vSphereVolume, are recommended.
Because these persistent volumes store data and settings, they should be protected using restrictive security policies. For persistent volumes based on file-based storage, such as NFS, Azure File, or Gluster, ensure that the permissions on the directories are set to prevent unauthorized access. For block storage, such as EBS volumes, Azure Disk, and iSCSI, ensure that the block devices are limited to only those users needing access.
The following are descriptions of storage volumes and their intended purpose:
- In-memory—Stores temporary system resources.
- Item packages—Stores large uploads and packages to support publishing workflows.
- Object—Stores uploaded and saved content, hosted tile and image layer caches, and geoprocessing output. Four are required for deployment.
- Queue—Stores asynchronous geoprocessing jobs.
- Relational—Stores hosted feature data and administrative aspects such as customization and configuration settings. Two are required for deployment.
- Spatiotemporal and index—Storage for logs and indexes as well as hosted feature data to support real-time and big data visualization and analytics.
- Usage metric viewer—Stores default and custom dashboards that display GIS service usage.
- Usage metric data—Stores GIS service usage data.
Consider the storage requirements for your organization's needs and define the size for each PV accordingly.
Static PVs
If you're provisioning static PVs prior to deployment, the following specifications and labels are recommended.
The number of PVs required for each architecture profile are provided.
Volume | Development profile | Standard availability profile | Enhanced availability profile |
---|---|---|---|
in-memory-volume | 1 | 1 | 1 |
item-packages-volume | 1 | 2 | 2 |
object-volume | 1 | 4 | 12 |
queue-volume | 1 | 2 | 2 |
relational-volume | 2 | 2 | 2 |
spatiotemporal-and-index-volume | 1 | 3 | 5 |
usage-metric-volume | 1 | 1 | 1 |
usage-metric-viewer-volume | 1 | 1 | 1 |
When configuring an organization with the setup wizard, the following specifications (volume name, size, app and tier) are used for volume binding; however, you can customize them as needed.
Volume | Size in GiB (minimum) | Access mode | Label (default) |
---|---|---|---|
in-memory-volume | 16 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=ignite |
item-packages-volume | 16 | ReadWriteOnce | arcgis/tier=api, arcgis/app=sharing |
object-volume | 16 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=minio |
queue-volume | 16 | ReadWriteOnce | arcgis/tier=queue, arcgis/app=rabbitmq |
relational-volume | 16 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=postgres |
spatiotemporal-and-index-volume | 16 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=elasticsearch |
usage-metric-volume | 30 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=prometheus |
usage-metric-viewer-volume | 1 | ReadWriteOnce | arcgis/tier=storage, arcgis/app=grafana |
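As an illustration of a statically provisioned volume that carries the recommended size, access mode, and labels, the following PersistentVolume sketch uses NFS purely as an example backend; the volume name, server, and path are placeholders, and any supported storage type can be substituted.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: relational-volume-1           # placeholder name
  labels:
    arcgis/tier: storage
    arcgis/app: postgres
spec:
  capacity:
    storage: 16Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                                # example backend only
    server: nfs.example.com           # placeholder server
    path: /exports/relational-1       # placeholder export path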
Dynamic PVs
For dynamic provisioning, a StorageClass is required. When configuring the organization with the setup wizard, the default StorageClass name is arcgis-storage-default.
The reclaimPolicy parameter on the StorageClass must be set to Retain.
Note:
Not all VM types support premium disks in Azure. Use a premium disk when the VM type supports it.
- For AKS, the following is an example of a StorageClass definition with Premium Azure Disk:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: arcgis-storage-default
provisioner: kubernetes.io/azure-disk
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
- For EKS, the following is an example of a StorageClass definition with GP2 type EBS volumes:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: arcgis-storage-default
provisioner: kubernetes.io/aws-ebs
parameters:
  fsType: ext4
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
You can also use the default storage classes provided with an AKS or EKS cluster. In AKS, these are the default (Azure Disk) and managed-premium storage classes. In EKS, this is the gp2 storage class.
Client workstation
The deployment scripts are bash scripts that can be run from a remote client workstation with a bash shell.
You need the following when setting up your client workstation:
- Kubectl
- An environment-specific command line interface (CLI)
Kubectl is a prerequisite to run the deployment script. See the Kubectl installation and setup documentation to download the Kubernetes command line tool.
When managing your deployment, you can use environment-specific command line tools, such as the Azure CLI for AKS, the AWS CLI for EKS, or the OpenShift CLI for OpenShift Container Platform.
TLS certificate
ArcGIS Enterprise on Kubernetes uses an NGINX-based ingress controller. This ingress controller is namespace scoped and is deployed to listen only to ingress traffic for the ArcGIS Enterprise namespace. A TLS certificate is required, with the FQDN in the certificate's common name and subject alternative name. Either a CA-signed certificate or a self-signed certificate can be used, but for security reasons, a CA-signed certificate is strongly recommended. This is the default TLS certificate for the ingress controller. The following certificate options are available in the deployment script to apply a TLS certificate for ingress traffic:
- An existing TLS secret that contains a private key and certificate
- A .pfx file that contains a private key and certificate
- A PEM-formatted private key and certificate
- A self-signed certificate
ArcGIS Enterprise on Kubernetes supports using a TLS certificate for the ingress controller that is issued and managed by Kubernetes cert-manager. This certificate must be stored in a TLS secret in the same namespace as ArcGIS Enterprise. The TLS secret can then be referenced either during deployment or after the ArcGIS Enterprise organization is created.
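If cert-manager issues the certificate, a Certificate resource similar to the following sketch can populate that TLS secret; the issuer, secret name, and namespace shown are assumptions.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: arcgis-ingress-cert        # placeholder name
  namespace: arcgis-enterprise     # must match the ArcGIS Enterprise namespace
spec:
  secretName: arcgis-ingress-tls   # TLS secret to reference during or after deployment
  dnsNames:
    - map.company.com              # the organization's FQDN
  issuerRef:
    name: my-ca-issuer             # placeholder issuer
    kind: ClusterIssuer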
ArcGIS Enterprise on Kubernetes and ArcGIS Pro
ArcGIS Pro 2.7 or later is required to consume services from ArcGIS Enterprise on Kubernetes. Prior versions are not supported.
ArcGIS Pro 2.8 or later is required to publish services to ArcGIS Enterprise on Kubernetes.
When registering a data store item from an enterprise geodatabase, the geodatabase version must be 10.9.0.2.8 or later. The geodatabase version number is a combination of ArcGIS Enterprise and ArcGIS Pro release numbers.
Enterprise geodatabases created from ArcMap cannot be registered as data items.