Configuring machine autoscaling

If you configured the cluster to automatically adjust the number of nodes based on resource needs, you need additional configuration to keep Che workspaces operating seamlessly.

Workspaces need special consideration when the autoscaler adds and removes nodes.

When the autoscaler adds a new node, workspace startup can take longer than usual until node provisioning is complete.

Conversely, when the autoscaler removes a node, it should not evict nodes that are running workspace pods: eviction would interrupt any workspaces in use and could cause the loss of unsaved data.

When the autoscaler adds a new node

You need to configure the Che installation further to ensure proper workspace startup while a new node is being added.

Procedure
  1. In the CheCluster Custom Resource, set the following fields to allow proper workspace startup when the autoscaler is provisioning a new node.

    spec:
      devEnvironments:
        startTimeoutSeconds: 600 (1)
        ignoredUnrecoverableEvents: (2)
          - FailedScheduling
    1 Set to at least 600 seconds to allow time for a new node to be provisioned during workspace startup.
    2 Ignore the FailedScheduling event to allow workspace startup to continue when a new node is provisioned.
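
If you manage the CheCluster Custom Resource from the command line, you can apply the same fields with a merge patch. The following sketch assumes the CheCluster Custom Resource is named eclipse-che and is deployed in the eclipse-che namespace; adjust both to match your installation.

    # Assumes the CheCluster CR is named "eclipse-che" in the "eclipse-che" namespace.
    $ kubectl patch checluster/eclipse-che -n eclipse-che --type merge \
        -p '{"spec": {"devEnvironments": {"startTimeoutSeconds": 600, "ignoredUnrecoverableEvents": ["FailedScheduling"]}}}'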

When the autoscaler removes a node

To prevent workspace pods from being evicted when the autoscaler needs to remove a node, add the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation to every workspace pod.

Procedure
  1. In the CheCluster Custom Resource, add the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation in the spec.devEnvironments.workspacesPodAnnotations field.

    spec:
      devEnvironments:
        workspacesPodAnnotations:
          cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
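
If you prefer the command line, you can add the same annotation with a merge patch against the CheCluster Custom Resource. This sketch assumes the CheCluster is named eclipse-che in the eclipse-che namespace; adjust both for your installation.

    # Assumes the CheCluster CR is named "eclipse-che" in the "eclipse-che" namespace.
    $ kubectl patch checluster/eclipse-che -n eclipse-che --type merge \
        -p '{"spec": {"devEnvironments": {"workspacesPodAnnotations": {"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"}}}}'
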
Verification steps
  1. Start a workspace and verify that the workspace pod contains the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation.

    $ kubectl get pod <workspace_pod_name> -o jsonpath='{.metadata.annotations.cluster-autoscaler\.kubernetes\.io/safe-to-evict}'
    false
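
  2. Optionally, confirm that the fields set in the previous procedures are present in the CheCluster Custom Resource, for example the startup timeout. This check assumes the CheCluster is named eclipse-che in the eclipse-che namespace; adjust both as needed.

    $ kubectl get checluster/eclipse-che -n eclipse-che \
        -o jsonpath='{.spec.devEnvironments.startTimeoutSeconds}'
    600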