Manage service deployments

ArcGIS Enterprise on Kubernetes is composed of many microservices that work together to support software features and workloads. These microservices are implemented as one or more Kubernetes deployments that are instantiated as pods in your organization.

Administrators can use ArcGIS Enterprise Manager or the ArcGIS Enterprise Administrator API to manually scale service deployments horizontally by adjusting the number of pods and vertically by adjusting the memory and CPU. For example, increasing the number of pods can improve availability for the organization, as pods are spread across multiple nodes, reducing the chance of failure.

Additionally, administrators can enable auto scaling horizontally by setting a threshold for CPU or memory, which alleviates the need to administer those services manually.

The default values for service deployments vary based on service type. These values are configured to run multiple pods to improve overall availability and throughput. However, in some cases, a single pod with added resources can be equally or more efficient. Service deployments are flexible, allowing adjustment in both dimensions.
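
As a rough mental model only (this is not the product's configuration schema), the two scaling dimensions can be pictured as a pod count and a set of per-pod resource bounds. The names and values below are illustrative:

  # Hypothetical illustration of the two scaling dimensions; not an
  # ArcGIS Enterprise on Kubernetes configuration format.
  scaling_profile = {
      "podCount": 3,                      # horizontal: more pods spread requests across nodes
      "resources": {                      # vertical: more CPU and memory per pod
          "cpuMin": "0.25", "cpuMax": "2",
          "memoryMin": "512Mi", "memoryMax": "4Gi",
      },
  }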

Note:

The Kubernetes cluster on which your organization is deployed has a finite number of compute nodes. By scaling many GIS services manually or automatically, your organization may reach the limit of compute resources allotted to ArcGIS Enterprise on Kubernetes. If this occurs, work with your IT administrator to add more nodes to the Kubernetes cluster. Consider using a cluster autoscaler as a solution in your environment.

To monitor the health, status, and use of your organization's service pods, use the overview settings page and service usage statistics. Using service usage statistics, you can measure the response times of your web services along with failure and timeout rates. These metrics can help you understand the overall performance of your services and provide the necessary inputs to determine if any of the service pods need to be adjusted with more or fewer resources.

Note:

Service usage statistics do not provide the CPU and memory use on a per-service pod basis. The deployment's role-based access control prohibits the collection of such metrics. As an alternative, you can use external monitoring tools with privileges to collect system-level metrics in addition to those metrics that are available.

For example, you can query service usage statistics periodically and, when a specified threshold is met, invoke the ArcGIS Enterprise Administrator API to adjust resources accordingly, giving you complete control over scaling your service pods.
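
The following is a minimal sketch of that pattern in Python. The URLs, token handling, and JSON field names are placeholders rather than documented Administrator API contracts; consult the ArcGIS Enterprise Administrator API reference for the actual endpoints and payloads before adapting it.

  import time
  import requests

  TOKEN = "<token>"  # placeholder: acquire a token using your organization's standard workflow
  STATS_URL = "https://example.domain.com/<context>/admin/<usage-statistics-endpoint>"  # placeholder
  SCALE_URL = "https://example.domain.com/<context>/admin/<service-scaling-endpoint>"   # placeholder

  MAX_AVG_RESPONSE_MS = 800   # illustrative threshold
  CHECK_INTERVAL_SEC = 300

  def average_response_ms():
      # Query usage statistics; the response field name here is illustrative only.
      r = requests.get(STATS_URL, params={"f": "json", "token": TOKEN}, timeout=30)
      r.raise_for_status()
      return r.json().get("averageResponseTimeMs", 0)

  def scale_to(pod_count):
      # Ask the (placeholder) scaling endpoint to run the given number of pods.
      r = requests.post(SCALE_URL, data={"f": "json", "token": TOKEN, "replicas": pod_count}, timeout=30)
      r.raise_for_status()

  while True:
      if average_response_ms() > MAX_AVG_RESPONSE_MS:
          scale_to(4)   # bump the pod count when responses slow down
      time.sleep(CHECK_INTERVAL_SEC)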

In addition, you can use ArcGIS Enterprise Manager to manage service deployment resources: scale the number of pods, set resource limits, and start and stop services.

The Services page in ArcGIS Enterprise Manager contains the following three tabs that categorize the service deployment types:

  • GIS services
  • System services
  • Utility services

GIS services

GIS services enable your organization's geospatial capabilities. GIS services include map, feature, and geocode services as well as hosted map and feature services. Hosted services are published using system-managed data stores. These services are located in the Hosted folder.

SampleWorldCities is provided as a default map service when the organization is created. You can use this map service to test and preview the functionality of a service from your organization's maps and apps.

GIS services that reference user-managed data stores require an active connection to a registered data store. Services using hosted data connect to system-managed data stores.

GIS services can be configured to run in shared or dedicated mode.

System services

System services are tools that help run the GIS services in your organization. For example, the PublishingTools service publishes data as web services. Many system services are started when the organization is created; however, some, including ReportingTools, SceneCachingControllers, and SceneCachingTools, must be started manually.

System services run in dedicated mode.

Utility services

Utility services enable specific functionality in your organization, for example, printing maps, locating addresses, calculating areas, finding directions, and performing analysis. Some utility services include default services, but you can also use your own services. To learn how to configure your organization to use utility services, see Configure utility services.

Utility services run in dedicated mode.

Scale service deployments

To scale a service deployment, follow these steps:

  1. Sign in to ArcGIS Enterprise Manager as an administrator.
  2. Click the Services button.

    The services page appears. On this page, service deployments are organized by service type on three tabs: GIS services, System services, and Utility services.

  3. Click the appropriate services tab and select the service deployment to scale or manage.

    The Overview page provides an overview of the service deployment and includes the current status and number of pods started. Additionally, for GIS services, the page indicates which mode the service is running in.

  4. On the Settings page, optionally provide new values for Number of pods, Resource limits, and Service time. Optionally, start and stop the service using the Start and Stop options on this page as well. A way to confirm the resulting pod count outside the browser is sketched after these steps.
  5. Click Save.
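
After saving, you can optionally verify the pod count directly in the cluster. The read-only check below uses the Kubernetes Python client; the namespace and label selector are assumptions you would replace with the values used by your deployment.

  # Read-only check with the Kubernetes Python client (pip install kubernetes).
  from kubernetes import client, config

  config.load_kube_config()   # or config.load_incluster_config() when running inside the cluster
  v1 = client.CoreV1Api()

  pods = v1.list_namespaced_pod(
      namespace="arcgis",                       # placeholder: your deployment's namespace
      label_selector="arcgis/app=<service>",    # placeholder: inspect your pods for the actual labels
  )
  for pod in pods.items:
      print(pod.metadata.name, pod.status.phase)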

Enable auto scaling

To set auto scaling for a service deployment, follow these steps:

  1. Sign in to ArcGIS Enterprise Manager as an administrator.
  2. Click the Services button.

    The services page appears. On this page, service deployments are organized by service type on three tabs: GIS services, System services, and Utility services.

  3. Click the appropriate services tab and select the service deployment to scale or manage.

    The Overview page provides an overview of the service deployment and includes the current status and number of pods started. Additionally, for GIS services, the page indicates which mode the service is running in.

  4. On the Settings page in the Scaling section, enable Auto scaling.
  5. Provide new values for the various auto scaling parameters:
    • Minimum number of pods—The minimum number of pods that are allocated to run for a service.
    • Maximum number of pods—The maximum number of pods that are allocated to run for a service.
    • Set threshold—Threshold for CPU and memory utilization. This value is used to determine when pods must be scaled up or down. Utilization is averaged across all running pods for a service deployment and is expressed as a percentage of the resource requests for CPU and memory. A worked example of this calculation follows these steps.
      Note:

      CPU is commonly used to determine auto scaling requirements.

    You can also turn the service on and off using the Start and Stop options on this page.

  6. Click Save.
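
The threshold drives scaling decisions relative to resource requests. Assuming the behavior follows the standard Kubernetes Horizontal Pod Autoscaler calculation (an assumption; the product may apply additional behaviors and policies), the arithmetic looks like this:

  import math

  def desired_pods(current_pods, avg_utilization_pct, threshold_pct, min_pods, max_pods):
      # Standard HPA-style calculation: scale the pod count by the ratio of observed
      # average utilization (relative to requests) to the threshold, then clamp to min/max.
      desired = math.ceil(current_pods * avg_utilization_pct / threshold_pct)
      return max(min_pods, min(max_pods, desired))

  # Example: 2 pods averaging 90 percent CPU utilization with a 60 percent threshold.
  print(desired_pods(2, 90, 60, min_pods=1, max_pods=6))   # -> 3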

Set scaling values

Several factors must be considered when setting scaling values. Consider the following general recommendations when determining appropriate values for a service:

  • Since the value you specify for CPU is a percentage of your CPU requests, evaluate your CPU requests and limits, which are expressed as min/max values for your service deployment.

  • The default CPU requests set on service deployments are intentionally low so that the initial overall footprint, and therefore the initial cost of ownership, is smaller. These default values may not represent the load on your service or your typical CPU utilization. It is recommended that you identify the typical CPU usage patterns of your services and increase the CPU requests to reflect that usage. This makes the percentage value you set more realistic, as illustrated in the sketch after this list.

  • You can further customize scaling parameters, for example, by editing behaviors and policies, in the ArcGIS Enterprise Administrator API.
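
To make these recommendations concrete, here is an illustrative calculation with hypothetical numbers. With a very low CPU request, ordinary load reads as several hundred percent utilization and the threshold loses meaning; raising the request toward typical usage puts the percentage back in a useful range.

  def utilization_pct(avg_cpu_cores_used, cpu_request_cores):
      # Utilization as the auto scaler evaluates it: observed usage as a percentage of the request.
      return 100 * avg_cpu_cores_used / cpu_request_cores

  # Hypothetical service that typically consumes 0.4 CPU cores per pod.
  print(utilization_pct(0.4, 0.125))   # low default-style request -> 320.0
  print(utilization_pct(0.4, 0.5))     # request raised toward typical usage -> 80.0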