In Kubernetes, newly created and unscheduled pods are automatically scheduled to nodes that meet their requirements. By using node affinity, taints, and tolerations, you can have more control over the nodes that pods are scheduled to. In ArcGIS Enterprise on Kubernetes, you can manage the placement of newly created GIS service pods from ArcGIS Enterprise Manager.
Node affinity allows you to specify rules that constrain pods to run on certain labeled nodes. Taints are applied to nodes to repel pods, while tolerations are applied to pods so they can be scheduled on tainted nodes. To learn more, refer to the Kubernetes documentation.
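For example, labels and taints are applied to nodes with kubectl before affinity rules and tolerations can reference them. A minimal sketch, in which the node name, label, and taint are placeholders:

```
# Label a node so that node affinity rules can match it
kubectl label nodes <node-name> high-performance=true

# Taint the node so that only pods with a matching toleration are scheduled on it
kubectl taint nodes <node-name> workload=high-performance:NoSchedule
```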
Combining node affinity, taints, and tolerations helps you achieve granular control over workload placement to enhance isolation, optimize resource allocation, and effectively meet compliance requirements within your Kubernetes cluster:
- Isolate workloads with specialized requirements—Use labels and node affinity rules to ensure that certain pods are scheduled on dedicated nodes. Use taints to mark nodes with specific characteristics, such as high CPU or memory capacity, as dedicated to ArcGIS workloads. Then apply tolerations on service pods to ensure they are scheduled on nodes with the required resources.
- Optimize resource allocation—Apply taints on nodes with limited resources to prevent resource overload, and define tolerations on service pods to match the resource constraints on these nodes. Combine node affinity with taints and tolerations to ensure that service pods are only scheduled on nodes that can meet their resource requirements.
- Implement geolocation-based scheduling—For applications that require data locality or adherence to specific regulations, use node affinity to schedule service pods based on the geographic location of nodes. Taint nodes based on their physical location or data sovereignty regulations, and apply tolerations on service pods to ensure they are scheduled on nodes that comply with the required location constraints.
Autoscaling enhances the use of node affinity and tolerations by dynamically adjusting the number of pods based on workload demands. This dynamic scaling ensures that pods are efficiently scheduled on nodes that meet specific requirements or have the necessary resources available, optimizing resource allocation. By combining autoscaling with node affinity and tolerations, Kubernetes clusters can achieve improved resource utilization, performance, and scalability—adapting to workload fluctuations while adhering to node constraints and preferences. To learn more about autoscaling, see Service scaling.
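In Kubernetes terms, this kind of scaling is expressed with a HorizontalPodAutoscaler. The following is a minimal sketch, assuming a hypothetical map-service deployment; in ArcGIS Enterprise on Kubernetes, you configure scaling from the service's settings rather than by applying a manifest directly:

```
# Minimal HorizontalPodAutoscaler sketch; the target name and thresholds
# are hypothetical
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: map-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: map-service          # hypothetical GIS service deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```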
Scenarios
To better understand how managing pod placement for GIS services can benefit your organization, review the following scenarios.
Scenario 1: Seasonal traffic surge for public mapping services
A public organization experiences a significant increase in traffic during a local festival. Users accessing the web map for event information experience delays due to high demand on the underlying map service. To address this, the organization administrator does the following (see the sketch after this list):
- Labels nodes that have high CPU and memory resources with the key-value pair high-performance: true.
- Taints high-performance nodes with workload=high-performance:NoSchedule.
- Applies node affinity rules to ensure that the map service pods are scheduled on nodes with high CPU and memory resources:
- Type—Preferred
- Key—high-performance
- Operator—In
- Value—true
- Applies tolerations to allow pods to run on nodes tainted for high-performance workloads, ensuring that the map service can handle the surge in traffic:
- Effect—NoSchedule
- Key—workload
- Operator—Equal
- Value—high-performance
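In Kubernetes terms, the affinity rule and toleration entered in Enterprise Manager correspond to the following pod specification fields. This is a sketch; the weight value is illustrative, and the software generates the actual specification:

```
# Pod specification fields corresponding to the affinity rule and
# toleration above; the weight value is illustrative
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: high-performance
              operator: In
              values: ["true"]
tolerations:
  - key: workload
    operator: Equal
    value: high-performance
    effect: NoSchedule
```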
Scenario 2: Data processing for environmental monitoring
An environmental agency is running a series of geospatial analyses to monitor changes in land use. The analysis requires significant computational resources, and the agency has dedicated nodes with GPUs for this purpose. To ensure that the geospatial analysis runs effectively without competing for resources with other services, the organization administrator does the following (see the sketch after this list):
- Labels GPU-enabled nodes with the key-value pair gpu: true.
- Taints GPU nodes with workload=high-resource:NoSchedule to prevent less resource-intensive pods from being scheduled there.
- Applies node affinity rules to schedule the analysis pods on the GPU nodes:
- Type—Required
- Key—gpu
- Operator—In
- Value—true
- Applies tolerations to allow pods to run on the tainted GPU nodes:
- Effect—NoSchedule
- Key—workload
- Operator—Equal
- Value—high-resource
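Because this rule uses the Required type, it renders as requiredDuringSchedulingIgnoredDuringExecution in the pod specification. A sketch of the scheduling fields that result from these settings:

```
# Pod specification fields corresponding to the Required affinity rule
# and the toleration above
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gpu
              operator: In
              values: ["true"]
tolerations:
  - key: workload
    operator: Equal
    value: high-resource
    effect: NoSchedule
```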
Scenario 3: Resource optimization for shared feature services
A city's GIS department has numerous feature services that are not heavily used but collectively burden a single service deployment. To allow the department to maintain service availability without overloading the system, the organization administrator does the following (see the sketch after this list):
- Labels nodes with the key resource-constrained.
- Taints resource-constrained nodes with resource-constrained:PreferNoSchedule.
- Applies node affinity rules to prioritize scheduling on nodes with lower resource availability:
- Type—Preferred
- Key—resource-constrained
- Operator—Exists
- Applies tolerations on feature service pods to ensure they can be scheduled on tainted nodes despite constraints:
- Effect—PreferNoSchedule
- Key—resource-constrained
- Operator—Exists
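With the Exists operator, no value is specified on either the affinity rule or the toleration. A sketch of the resulting scheduling fields; the weight value is illustrative:

```
# Pod specification fields corresponding to the Preferred affinity rule
# and the toleration above; the weight value is illustrative
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: resource-constrained
              operator: Exists
tolerations:
  - key: resource-constrained
    operator: Exists
    effect: PreferNoSchedule
```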
Manage pod placement
Before managing pod placement, configure node groups by adding the labels you will use to define node affinity rules and the taints that your tolerations will match.
Note:
In most environments, you can group your workloads using node pools or node groups. In this workflow, it is recommended that you apply labels and taints to groups of nodes rather than to individual nodes, as shown in the following example.
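For example, you can apply a label and a taint to every node in a pool with a single kubectl command by selecting on a label the pool already carries. The pool label key and value below are placeholders; the label your provider sets on pool members varies:

```
# Apply a label and a taint to all nodes in a pool at once, selected by
# an existing pool label (the pool label key and value are placeholders)
kubectl label nodes -l agentpool=gis-workers high-performance=true
kubectl taint nodes -l agentpool=gis-workers workload=high-performance:NoSchedule
```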
To set node affinity rules and tolerations for newly created GIS service pods, complete the following steps:
- Sign in to ArcGIS Enterprise Manager as an administrator.
- Click the Services button on the sidebar.
- Select the GIS service that you want to manage, and click the Pod placement tab.
- To add a node affinity rule to pods, provide the following information in the Node affinity section and click Add (see the sketch after this list for how these fields map to the pod specification):
- Type—The type of node affinity. The following are the available types:
- Preferred (PreferredDuringSchedulingIgnoredDuringExecution)—The pod prefers to be scheduled on a node that satisfies the rule.
- Required (RequiredDuringSchedulingIgnoredDuringExecution)—The pod must be scheduled on a node that satisfies the rule.
- Key—The key of the node label or annotation that the rule should match.
- Operator—The operator for the rule. The following are the available operators:
- In—The node label or annotation must be in the list of values specified.
- Not in—The node label or annotation must not be in the list of values specified.
- Exists—The node must have the specified label or annotation.
- Does not exist—The node must not have the specified label or annotation.
- Value—The list of values to match against the node label or annotation.
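As a sketch of how these fields translate to the pod specification, a Required rule with the Not in operator and two values renders as follows. The key and values here are illustrative, echoing the geolocation use case above:

```
# Illustrative mapping of the form fields to a pod's nodeAffinity stanza
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:   # Type: Required
      nodeSelectorTerms:
        - matchExpressions:
            - key: region              # Key (illustrative)
              operator: NotIn          # Operator: Not in
              values:                  # Value: a list of values
                - us-east
                - us-west
```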
- To add tolerations to pods, provide the following information in the Tolerations section and click Add (see the sketch after this list):
- Effect—The taint effect that the toleration should match. The following are the available effects:
- No schedule—New pods are not scheduled to the tainted node without a matching toleration.
- Prefer no schedule—New pods try to avoid being scheduled on the tainted node without a matching toleration, but it is not guaranteed.
- Key—The key of the taint that the toleration should match.
- Operator—The operator to use for the toleration. The following are the available operators:
- Equal—The pod tolerates a taint with the specified key and value.
- Exists—The pod tolerates any taint with the specified key.
- Value—The value of the taint that the toleration should match when the operator is set to Equal.
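The toleration fields map to the pod specification in the same way. A sketch with one toleration per operator; the keys and values are illustrative:

```
# Illustrative mapping of the form fields to a pod's tolerations list
tolerations:
  - key: workload              # Key
    operator: Equal            # Operator: Equal; Value must match the taint's value
    value: high-performance    # Value
    effect: NoSchedule         # Effect: No schedule
  - key: resource-constrained  # Key
    operator: Exists           # Operator: Exists; no Value is needed
    effect: PreferNoSchedule   # Effect: Prefer no schedule
```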
- Click Save.
Edit node affinity rules and tolerations by clicking the Edit button next to each listing, or delete them by clicking the Delete button.
Note:
Pods already running on nodes are not evicted if a change is made and a rule or toleration is no longer satisfied; both affinity types are IgnoredDuringExecution, and neither the NoSchedule nor the PreferNoSchedule taint effect evicts running pods.