Deprovisioning

Understand different ways Karpenter deprovisions nodes

Control Flow

Karpenter sets a Kubernetes finalizer on each node it provisions. The finalizer blocks deletion of the node object while the Termination Controller cordons and drains the node, before removing the underlying machine. Deprovisioning is triggered by the Deprovisioning Controller, by the user through manual deprovisioning, or through an external system that sends a delete request to the node object.
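For illustration, a node provisioned by Karpenter carries the karpenter.sh/termination finalizer in its metadata; the node name below is a placeholder:

apiVersion: v1
kind: Node
metadata:
  name: ip-192-168-1-1.us-west-2.compute.internal   # placeholder node name
  finalizers:
    - karpenter.sh/termination   # blocks deletion of the node object until Karpenter finishes cordon, drain, and machine removal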

Deprovisioning Controller

Karpenter automatically discovers deprovisionable nodes and spins up replacements when needed. Karpenter deprovisions nodes by executing one automatic method at a time, in order of Expiration, Drift, Emptiness, and then Consolidation. Each method varies slightly but they all follow the standard deprovisioning process:

  1. Identify a list of prioritized candidates for the deprovisioning method.
    • If there are pods that cannot be evicted on the node, Karpenter will ignore the node and try deprovisioning it later.
    • If there are no deprovisionable nodes, continue to the next deprovisioning method.
  2. For each deprovisionable node, execute a scheduling simulation with the pods on the node to find if any replacement nodes are needed.
  3. Cordon the node(s) to prevent pods from scheduling to it.
  4. Pre-spin any replacement nodes needed as calculated in Step (2), and wait for them to become ready.
    • If a replacement node fails to initialize, un-cordon the node(s), and restart from Step (1), starting at the first deprovisioning method again.
  5. Delete the node(s) and wait for the Termination Controller to gracefully shutdown the node(s).
  6. Once the Termination Controller terminates the node, go back to Step (1), starting at the first deprovisioning method again.

Termination Controller

When a Karpenter node is deleted, the Karpenter finalizer will block deletion and the APIServer will set the DeletionTimestamp on the node, allowing Karpenter to gracefully shut down the node, modeled after K8s Graceful Node Shutdown. Karpenter’s graceful shutdown process will:

  1. Cordon the node to prevent pods from scheduling to it.
  2. Begin evicting the pods on the node with the K8s Eviction API to respect PDBs, while ignoring all daemonset pods and static pods. Wait for the node to be fully drained before proceeding to Step (3).
    • While waiting, if the underlying machine for the node no longer exists, remove the finalizer to allow the APIServer to delete the node, completing termination.
  3. Terminate the machine in the Cloud Provider.
  4. Remove the finalizer from the node to allow the APIServer to delete the node, completing termination.
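Because draining uses the Eviction API, PodDisruptionBudgets can pace or block pod evictions and therefore node termination. A minimal sketch, with a hypothetical name and selector:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: inflate-pdb          # hypothetical name
spec:
  minAvailable: 2            # evictions are refused if they would leave fewer than 2 ready pods
  selector:
    matchLabels:
      app: inflate           # hypothetical label selecting the protected pods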

Methods

There are both automated and manual ways of deprovisioning nodes provisioned by Karpenter:

Manual Methods

  • Node Deletion: You can use kubectl to manually remove one or more Karpenter nodes:

    # Delete a specific node
    kubectl delete node $NODE_NAME
    
    # Delete all nodes owned by any provisioner
    kubectl delete nodes -l karpenter.sh/provisioner-name
    
    # Delete all nodes owned by a specific provisioner
    kubectl delete nodes -l karpenter.sh/provisioner-name=$PROVISIONER_NAME
    
  • Provisioner Deletion: Nodes are owned, through an owner reference, by the Provisioner that launched them. Karpenter will gracefully terminate those nodes through cascading deletion when the owning provisioner is deleted.
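
    On the node object, that ownership looks roughly like the following (a sketch; the node name and uid are placeholders):

    apiVersion: v1
    kind: Node
    metadata:
      name: ip-192-168-1-2.us-west-2.compute.internal   # placeholder node name
      ownerReferences:
        - apiVersion: karpenter.sh/v1alpha5
          kind: Provisioner
          name: default
          uid: <provisioner-uid>                         # placeholder; deleting this Provisioner cascades to the node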

Automated Methods

  • Emptiness: Karpenter notes when the last workload (non-daemonset) pod stops running on a node. From that point, Karpenter waits the number of seconds set by ttlSecondsAfterEmpty in the provisioner, then requests deletion of the node. This feature can keep costs down by removing nodes that are no longer being used for workloads (a minimal example Provisioner follows this list).
  • Expiration: Karpenter will annotate nodes as expired and deprovision nodes after they have lived a set number of seconds, based on the provisioner ttlSecondsUntilExpired value. One use case for node expiry is to periodically recycle nodes. Old nodes (with a potentially outdated Kubernetes version or operating system) are deleted, and replaced with nodes on the current version (assuming that you requested the latest version, rather than a specific version).
  • Consolidation: Karpenter works to actively reduce cluster cost by identifying when:
    • Nodes can be removed as their workloads will run on other nodes in the cluster.
    • Nodes can be replaced with cheaper variants due to a change in the workloads.
  • Drift: Karpenter will annotate nodes as drifted and deprovision nodes that have drifted from their desired specification. See Drift to see which fields are considered.
  • Interruption: If enabled, Karpenter will watch for upcoming involuntary interruption events that could affect your nodes (health events, spot interruption, etc.) and will cordon, drain, and terminate the node(s) ahead of the event to reduce workload disruption.
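
The Emptiness and Expiration methods are configured through two Provisioner fields. A minimal sketch with illustrative values:

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  ttlSecondsAfterEmpty: 30        # delete a node 30 seconds after its last workload pod stops running
  ttlSecondsUntilExpired: 604800  # expire and replace nodes after 7 days, regardless of utilization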

Consolidation

Karpenter has two mechanisms for cluster consolidation:

  • Deletion - A node is eligible for deletion if all of its pods can run on free capacity of other nodes in the cluster.
  • Replace - A node can be replaced if all of its pods can run on a combination of free capacity of other nodes in the cluster and a single cheaper replacement node.
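
Both mechanisms only consider nodes owned by a Provisioner that has consolidation enabled; a minimal sketch of the v1alpha5 field:

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  consolidation:
    enabled: true   # opt nodes from this Provisioner into consolidation

Note that a single Provisioner cannot combine consolidation with ttlSecondsAfterEmpty, since consolidation already removes empty nodes.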

Consolidation has three mechanisms that are performed in order to attempt to identify a consolidation action:

  1. Empty Node Consolidation - Delete any entirely empty nodes in parallel
  2. Multi-Node Consolidation - Try to delete two or more nodes in parallel, possibly launching a single replacement that is cheaper than the price of all nodes being removed
  3. Single-Node Consolidation - Try to delete any single node, possibly launching a single replacement that is cheaper than the price of that node

It’s impractical to examine all possible consolidation options for multi-node consolidation, so Karpenter uses a heuristic to identify a likely set of nodes that can be consolidated. For single-node consolidation we consider each node in the cluster individually.

When there are multiple nodes that could potentially be deleted or replaced, Karpenter chooses to consolidate the node that overall disrupts your workloads the least by preferring to terminate:

  • nodes running fewer pods
  • nodes that will expire soon
  • nodes with lower priority pods

If consolidation is enabled, Karpenter periodically reports events against nodes that indicate why the node can’t be consolidated. These events can be used to investigate nodes that you expect to have been consolidated, but still remain in your cluster.

Events:
  Type     Reason                   Age                From             Message
  ----     ------                   ----               ----             -------
  Normal   Unconsolidatable         66s                karpenter        pdb default/inflate-pdb prevents pod evictions
  Normal   Unconsolidatable         33s (x3 over 30m)  karpenter        can't replace with a cheaper node

Interruption

If interruption-handling is enabled, Karpenter will watch for upcoming involuntary interruption events that would cause disruption to your workloads. These interruption events include:

  • Spot Interruption Warnings
  • Scheduled Change Health Events (Maintenance Events)
  • Instance Terminating Events
  • Instance Stopping Events

When Karpenter detects one of these events will occur to your nodes, it automatically cordons, drains, and terminates the node(s) ahead of the interruption event to give the maximum amount of time for workload cleanup prior to compute disruption. This enables scenarios where the terminationGracePeriod for your workloads may be long or cleanup for your workloads is critical, and you want enough time to be able to gracefully clean-up your pods.
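
As an illustration, a workload that needs a long cleanup window can request one in its pod template (the value below is arbitrary):

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 300   # allow each pod up to 5 minutes to shut down after eviction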

For Spot interruptions, the provisioner will start a new machine as soon as it sees the Spot interruption warning. Spot interruptions have a 2 minute notice before Amazon EC2 reclaims the instance. Karpenter’s average node startup time means that, generally, there is sufficient time for the new node to become ready and to move the pods to the new node before the machine is reclaimed.

Karpenter enables this feature by watching an SQS queue which receives critical events from AWS services which may affect your nodes. Karpenter requires that an SQS queue be provisioned and EventBridge rules and targets be added that forward interruption events from AWS services to the SQS queue. Karpenter provides details for provisioning this infrastructure in the CloudFormation template in the Getting Started Guide.

To enable the interruption handling feature flag, configure the karpenter-global-settings ConfigMap with the following value mapped to the name of the interruption queue that handles interruption events.

apiVersion: v1
kind: ConfigMap
metadata:
  name: karpenter-global-settings
  namespace: karpenter
data:
  ...
  aws.interruptionQueueName: karpenter-cluster
  ...

Drift

Karpenter Drift will classify each CRD field as a (1) Static, (2) Dynamic, or (3) Behavioral field and will treat them differently. Static Drift will be a one-way reconciliation, triggered only by CRD changes. Dynamic Drift will be a two-way reconciliation, triggered by machine/node/instance changes as well as by Provisioner or AWSNodeTemplate changes.

  1. For Static Fields, values in the CRDs are reflected in the machine in the same way that they’re set. A machine will be detected as drifted if the values in the CRDs do not match the values in the machine.

  2. Dynamic Fields can correspond to multiple values and must be handled differently. Dynamic fields can create cases where drift occurs without changes to CRDs, or where CRD changes do not result in drift. For example, if a machine has node.kubernetes.io/instance-type: m5.large, and requirements change from node.kubernetes.io/instance-type In [m5.large] to node.kubernetes.io/instance-type In [m5.large, m5.2xlarge], the machine will not be drifted because its value is still compatible with the new requirements. Conversely, for an AWS installation, if a machine is using a machine image ami: ami-abc, but a new image is published, Karpenter’s AWSNodeTemplate.amiSelector will discover that the new correct value is ami: ami-xyz, and detect the machine as drifted (see the sketch after this list).

  3. Behavioral Fields are treated as over-arching settings on the Provisioner to dictate how Karpenter behaves. These fields don’t correspond to settings on the machine or instance. They’re set by the user to control Karpenter’s Provisioning and Deprovisioning logic. Since these don’t map to a desired state of machines, these fields will not be considered for Drift.
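
As a sketch of the dynamic case in (2), an AWSNodeTemplate whose amiSelector discovers AMIs by tag can resolve to a newly published AMI, at which point machines still running the old AMI are detected as drifted. The tag key and value below are hypothetical:

apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  amiSelector:
    karpenter.sh/discovery: my-cluster   # hypothetical tag; Karpenter selects AMIs whose tags match
  subnetSelector:
    karpenter.sh/discovery: my-cluster   # hypothetical subnet discovery tag
  securityGroupSelector:
    karpenter.sh/discovery: my-cluster   # hypothetical security group discovery tag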

Read the Drift Design for more.

Provisioner Fields        Static   Dynamic   Behavioral   Implemented
------------------        ------   -------   ----------   -----------
Startup Taints              x
Taints                      x
Labels                      x
Annotations                 x
Node Requirements                     x
Kubelet Configuration       x
Weight                                            x            NA
Limits                                            x            NA
Consolidation                                     x            NA
TTLSecondsUntilExpired                            x            NA
TTLSecondsAfterEmpty                              x            NA

AWSNodeTemplate Fields    Static   Dynamic   Behavioral   Implemented
----------------------    ------   -------   ----------   -----------
Subnet Selector                       x                        x
Security Group Selector               x                        x
Instance Profile            x
AMI Family/AMI Selector               x                        x
UserData                    x
Tags                        x
Metadata Options            x
Block Device Mappings       x
Detailed Monitoring         x

To enable the drift feature flag, refer to the Settings Feature Gates.

Karpenter will annotate a node with karpenter.sh/voluntary-disruption: "drifted" if the node is drifted and does not already have the annotation.

Karpenter will remove the karpenter.sh/voluntary-disruption: "drifted" annotation in the following scenarios:

  1. The featureGates.driftEnabled setting is not enabled but the node is drifted; Karpenter will remove the annotation so another disruption controller can annotate the node.
  2. The node isn’t drifted but has the annotation; Karpenter will remove it.

If the node is marked as voluntarily disrupted by another controller, Karpenter will do nothing.
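
For reference, a drifted node carries the annotation like this (the node name is a placeholder):

apiVersion: v1
kind: Node
metadata:
  name: ip-192-168-1-10.us-west-2.compute.internal   # placeholder node name
  annotations:
    karpenter.sh/voluntary-disruption: "drifted"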

Controls

Pod-Level Controls

You can block Karpenter from voluntarily choosing to disrupt certain pods by setting the karpenter.sh/do-not-evict: "true" annotation on the pod. This is useful for pods that you want to run from start to finish without disruption. By opting pods out of this disruption, you are telling Karpenter that it should not voluntarily remove a node containing this pod.

Examples of pods that you might want to opt-out of disruption include an interactive game that you don’t want to interrupt or a long batch job (such as you might have with machine learning) that would need to start over if it were interrupted.

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        karpenter.sh/do-not-evict: "true"

Examples of voluntary node removal that will be prevented by this annotation include:

  • Consolidation
  • Emptiness
  • Expiration
  • Drift

Node-Level Controls

Nodes can be opted out of consolidation deprovisioning by setting the annotation karpenter.sh/do-not-consolidate: "true" on the node.

apiVersion: v1
kind: Node
metadata:
  annotations:
    karpenter.sh/do-not-consolidate: "true"

Example: Disable Consolidation on Provisioner

Provisioner .spec.annotations allow you to set annotations that will be applied to all nodes launched by this provisioner. By setting the annotation karpenter.sh/do-not-consolidate: "true" on the provisioner, you will selectively prevent all nodes launched by this Provisioner from being considered in consolidation calculations.

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  annotations: # will be applied to all nodes
    karpenter.sh/do-not-consolidate: "true"