Service scaling

As traffic patterns and user demands on your GIS services change, you can adjust the resources available to your services.

Service scaling examples

To meet performance demands while conserving resources used by your organization, it's important to understand when and how to scale the resources available to your services. The following examples are hypothetical scenarios in which organization administrators need to consider scaling their resources:

  • A web map in a public organization is suddenly receiving high traffic volume and users are experiencing performance delays. The organization administrator views the system logs and determines that a map service used by the web map is overburdened. First, they may change the service mode from using shared resources to using dedicated resources. Next, they can increase pod replicas for that service's deployment. By providing dedicated resources for the map service, the administrator ensures that the high traffic for the service is handled without performance issues.
  • A surveying company has accumulated hundreds of feature services in their organization. All of them are set to shared mode, so there is one service deployment supporting them. No service receives high traffic, but the overall use of the organization's GIS content is burdening the service deployment. The organization administrator increases the number of pod replicas in the service deployment. With more shared instances running, traffic to the organization's many feature services is adequately handled.
  • During a content migration project, a city government's GIS organization is republishing many web maps and web layers to their organization. Due to time constraints, they want to complete this quickly. Because publishing the services underlying the web maps and web layers is performed by the PublishingTools utility service, the machine resources available to that utility service determine how quickly publishing can occur. The organization administrator temporarily increases the pod replicas in the PublishingTools service deployment to improve publishing efficiency during the project. After the project is complete, they decrease the pod replicas in the service deployment to conserve machine resources. A sketch of this temporary scale-up follows this list.
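
The third scenario can be illustrated at the Kubernetes level. The following sketch, written with the Kubernetes Python client, raises and later lowers the replica count of a service deployment. The namespace and deployment name are hypothetical placeholders, and in practice the same change is made through ArcGIS Enterprise Manager as described later in this topic.

```python
# A minimal sketch of temporarily scaling a service deployment, assuming direct
# cluster access via the current kubeconfig context. The namespace and
# deployment name are hypothetical; use ArcGIS Enterprise Manager for the
# supported workflow.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

NAMESPACE = "arcgis"             # assumed namespace for the organization
DEPLOYMENT = "publishing-tools"  # hypothetical name of the PublishingTools deployment

# Raise the replica count for the duration of the migration project.
apps.patch_namespaced_deployment_scale(
    name=DEPLOYMENT,
    namespace=NAMESPACE,
    body={"spec": {"replicas": 4}},
)

# ...after the project is complete, scale back down to conserve resources.
apps.patch_namespaced_deployment_scale(
    name=DEPLOYMENT,
    namespace=NAMESPACE,
    body={"spec": {"replicas": 1}},
)
```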

Service scaling options

You have two primary options to scale services:

Adjust the service mode

If a map or feature service that's using shared resources is receiving constant traffic, you can switch its instance type to use dedicated resources. This creates a separate pool of resources reserved for that service alone.
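
As an illustration only, the following sketch edits a service definition through the administrative REST API to request dedicated instances. The organization URL, service path, "provider" property, and edit operation shown here are assumptions made for the example, not the documented request format; consult the ArcGIS Enterprise administrative API reference for the exact payload.

```python
# A hedged sketch of switching a hosted service from shared to dedicated
# resources. The endpoint path, the "provider" value, and the token handling
# are illustrative assumptions, not a verified API contract.
import json
import requests

ADMIN_URL = "https://org.example.com/arcgis/admin"  # placeholder organization URL
TOKEN = "<admin-token>"                             # acquired separately

service_url = f"{ADMIN_URL}/services/Hosted/Parcels.MapServer"

# Read the current service definition.
definition = requests.get(service_url, params={"f": "json", "token": TOKEN}).json()

# Assumed property controlling shared vs. dedicated instances.
definition["provider"] = "DedicatedInstance"

# Submit the edited definition back to the service.
resp = requests.post(
    f"{service_url}/edit",
    data={"service": json.dumps(definition), "f": "json", "token": TOKEN},
)
print(resp.json())
```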

Reallocate system resources

You can scale the number of pods assigned to a service deployment using ArcGIS Enterprise Manager. This option is useful when the pods currently serving the service are insufficient for its traffic and users are experiencing performance delays.

When you increase the number of pod replicas for a deployment, the Kubernetes cluster creates additional copies of the service deployment's existing pods, including their service configuration and service instances.
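
After a scaling change, it can help to confirm that the new replicas have come up. The brief sketch below, reusing the hypothetical namespace and deployment name from the earlier example, reads the deployment and compares the requested replica count with the number of ready replicas.

```python
# Check that the scaled deployment's replicas are ready to serve traffic.
# Namespace and deployment name are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment("publishing-tools", "arcgis")
print(f"requested replicas: {dep.spec.replicas}")
print(f"ready replicas:     {dep.status.ready_replicas or 0}")
```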

This also increases the availability and total throughput of instances for the service, as well as the memory and CPU consumption of the service. Because you're scaling your Kubernetes infrastructure, this option is fault tolerant; pods that fail are automatically restored without affecting other pods.
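
The self-healing behavior can be observed by listing the pods behind a deployment. The sketch below, again assuming cluster access and a hypothetical label selector, prints each pod's phase and restart count; a failed pod reappears in the Running phase once the Deployment controller replaces it.

```python
# List the pods backing a service deployment and report restarts.
# The namespace and label selector are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod("arcgis", label_selector="app=publishing-tools")
for pod in pods.items:
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    print(f"{pod.metadata.name}: phase={pod.status.phase}, restarts={restarts}")
```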

Note:

The Kubernetes cluster on which your organization is deployed has a finite number of compute nodes. By scaling many GIS services, your organization may reach the limit of compute resources allotted to ArcGIS Enterprise on Kubernetes. If this occurs, work with your IT administrator to add more nodes to the Kubernetes cluster.
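
To gauge how close the cluster is to that limit, a rough sketch such as the following can list each node's allocatable CPU and memory, assuming read access to the cluster; if newly scaled pods cannot be scheduled, adding nodes with your IT administrator is the remedy described above.

```python
# Print each node's allocatable CPU and memory as a rough capacity check.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    alloc = node.status.allocatable
    print(f"{node.metadata.name}: cpu={alloc['cpu']}, memory={alloc['memory']}")
```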