By Shunyao Li | Software Engineer, Cloud Runtime
Over the past three years, the Cloud Runtime team’s journey has gone from “Why Kubernetes?” to “How do we scale?”. There is no doubt that the Kubernetes-based compute platform has been a huge success at Pinterest. We support big data processing, machine learning, distributed training, workflow engines, CI/CD, and internal tools, backing every engineer at Pinterest.
Why Control Plane Latency Matters
As more and more business-critical workloads onboard onto Kubernetes, it is increasingly important to have a high-performance control plane that efficiently orchestrates every workload. Critical workloads such as content model training and ads reporting pipelines are delayed whenever the control plane takes too long to translate user workloads into native Kubernetes pods.
To measure control plane performance, we introduced top-line business metrics as Service Level Indicators and Objectives (SLI/SLO) in early 2021. Our control plane SLI is reconcile latency: the time from when a user change is received to when it propagates out of the control plane. For example, one reconcile latency measurement for batch jobs is the delay between workload creation and Pod creation.
The initial SLO was set to 99%. At the time of writing this post, we are proudly serving a control plane SLO of 99.9%. This post is about how we improved the control plane to achieve high performance.
Control Plane in a Nutshell
The control plane is the nerve center of the Kubernetes platform and is responsible for workload orchestration. It listens for changes from the Kubernetes API, compares the desired state of each resource with its actual status, and takes action to make the actual status match the desired one (reconciliation). Workload orchestration also includes making scheduling decisions about where to place workloads.
The Kubernetes control plane consists of a set of resource controllers. Our resource controllers are built on the controller framework, which has an informer-reflector-cache architecture. The informer uses the List-Watch mechanism to fetch and monitor resource changes from the Kubernetes API. The reflector updates the cache with resource changes and dispatches events for handling. The cache stores resource objects and serves List and Get calls. The controller framework follows the producer-consumer pattern: the event handler is the producer, responsible for queuing reconcile requests, while the controller worker pool is the consumer, pulling items from the workqueue to run the reconciliation logic.
Figure 1: Kubernetes Controller Framework
Challenge 1: Worker Pool Efficiency
The controller worker pool is where the reconciliation of actual status to desired status occurs. We leveraged the metrics provided by the workqueue package to gain deep insight into worker pool efficiency. These metrics are:
- Work duration: how long it takes to process an item from the workqueue
- Queue duration: how long an item waits in the workqueue before being processed
- Enqueue rate: how often an item gets enqueued
- Retry rate: how often an item gets retried
- Queue depth: current depth of the workqueue
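These metrics can be derived by timestamping each item as it enters and leaves the queue. A rough stdlib sketch of how queue depth and queue duration fall out of that bookkeeping (the real implementation is the workqueue package’s metrics provider):

```go
package main

import (
	"fmt"
	"time"
)

// queuedItem pairs a work item with the time it was enqueued, so that
// queue duration (dequeue time minus enqueue time) can be computed.
type queuedItem struct {
	key        string
	enqueuedAt time.Time
}

type instrumentedQueue struct {
	items []queuedItem
}

func (q *instrumentedQueue) Add(key string) {
	q.items = append(q.items, queuedItem{key: key, enqueuedAt: time.Now()})
}

// Depth is the "queue depth" metric: items currently waiting.
func (q *instrumentedQueue) Depth() int { return len(q.items) }

// Get pops the head and reports how long it waited (queue duration);
// timing the subsequent reconcile call gives work duration.
func (q *instrumentedQueue) Get() (string, time.Duration) {
	item := q.items[0]
	q.items = q.items[1:]
	return item.key, time.Since(item.enqueuedAt)
}

func main() {
	q := &instrumentedQueue{}
	q.Add("default/job-1")
	q.Add("default/job-2")
	fmt.Println("queue depth:", q.Depth())

	key, queueDuration := q.Get()
	fmt.Println("dequeued:", key, "waited at least 0s:", queueDuration >= 0)
}
```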
Among these metrics, queue depth drew our attention because its spikes correlate strongly with control plane performance degradation. Spikes in queue depth indicate head-of-line blocking, which usually happens when a large number of irrelevant items are enqueued in a short period of time. Items that genuinely need to be reconciled then wait in the queue longer, causing SLI dips.
Figure 2: Correlation between control plane queue depth spikes and control plane instant SLI dips.
To resolve the head-of-line blocking, we categorize informer events and handle them with different priorities. User-triggered events are high priority and need to be reconciled immediately, e.g., Create events triggered by users creating workloads or Update events triggered by users updating workload labels. On the other hand, some system-triggered events are low priority, e.g., a Create event during informer initialization or an Update event during informer periodic resync. These do not affect our SLI and are not as time-sensitive as user-triggered events, so they can be delayed rather than piling up in the queue and blocking urgent events. The following sections describe how we identify and delay these system-triggered events.
Create Events During Informer Initialization
Each time we update the controller, the informer initializes its List-Watch mechanism by issuing a List call to the API server. It then stores the returned results in its cache and triggers a Create event for each result, causing a spike in queue depth. The solution is to delay Create events for objects that already exist: an object cannot be created twice by a user, so any subsequent Create event for a known object must come from informer initialization.
Figure 3: Control plane queue depth spikes to 10k during an informer initialization, resulting in a dip in control plane instant SLI.
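One way to implement this check is to remember which object keys the controller has already observed: a Create event for a known key can only come from a re-initializing informer. A minimal sketch, with illustrative names (a production version would also need to handle deletes and be safe for concurrent use):

```go
package main

import "fmt"

// createClassifier tracks object keys that have already been observed.
// A Create event for a key we have seen before cannot be a real user
// creation (an object cannot be created twice), so it must come from
// the informer re-listing during initialization and can be delayed.
type createClassifier struct {
	seen map[string]bool
}

func newCreateClassifier() *createClassifier {
	return &createClassifier{seen: map[string]bool{}}
}

// isUserCreate reports whether a Create event is user-triggered
// (first time we see the key) or initialization noise.
func (c *createClassifier) isUserCreate(key string) bool {
	if c.seen[key] {
		return false
	}
	c.seen[key] = true
	return true
}

func main() {
	c := newCreateClassifier()
	fmt.Println(c.isUserCreate("default/job-1")) // true: genuine user create
	fmt.Println(c.isUserCreate("default/job-1")) // false: informer re-list
}
```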
Update Events During Informer Periodic Resync
Periodically, the informer goes over all items remaining in its cache, triggering an Update event for each item. These events are enqueued at the same time and result in a queue depth spike. As shown in Figure 2, the queue depth spike aligns with the informer periodic resync interval we configured.
Update events triggered by periodic resync are easy to identify: the old and new objects are always identical, since both come from the informer cache. The solution is to delay Update events whose old and new objects are deep-equal. The delay is randomized so that queue depth spikes are smoothed out by scattering resync requests over a period of time.
The above optimizations solved the head-of-line blocking problem caused by inefficient worker pools. As a result, there are no longer recurring spikes in control plane queue depth. The average queue depth during informer periodic resync has been reduced by 97%, from 1k to 30. The instant SLI dips caused by the control plane queue depth spikes have been eliminated.
Figure 4: Improvement on workqueue efficiency
Challenge 2: Leadership Switch
Only the leader in the controller fleet does the actual reconciliation work, and leadership switches happen frequently during deployments or controller pod evictions. A prolonged leadership switch has a considerable negative impact on the control plane instant SLI.
Figure 5: Control plane leadership switches result in instant SLI dips.
Leader Election Mechanisms
There are two common leader election mechanisms for the Kubernetes control plane.
- Leader-with-lease: the leader pod periodically renews a lease and gives up leadership when it cannot renew it. Kubernetes native components including cluster-autoscaler, kube-controller-manager, and kube-scheduler use the leader-with-lease implementation in client-go.
- Leader-for-life: the leader pod only gives up leadership when it is deleted and its dependent configmap is garbage collected. The configmap is the source of truth for leadership, so it is impossible to have two leaders at the same time (a.k.a. split brain). All resource controllers in our control plane use the leader-for-life mechanism from the operator framework to ensure we have at most one leader at a time.
In this post, we focus on the optimization of the leader-for-life approach to reduce control plane leadership switch time and improve control plane performance.
To monitor the leadership switch time, we implemented fine-grained leadership switch metrics with the following phases:
- Leaderless: the period during which there is no leader
- Leader ramp-up: the time from a controller pod becoming the leader to its first reconciliation. The new leader pod cannot begin reconciling as soon as it becomes the leader; it must first wait until all relevant informer caches are synchronized.
Figure 6: Diagram of the leadership switch procedure
Figure 7: Control plane leadership switch monitored by the proposed leadership switch metrics
As shown in Figure 7, the control plane leadership switch usually takes more than one minute to complete, which is unacceptable for a high-performance control plane. We proposed the following solutions to reduce the leadership switch time.
Reduce Leaderless Time
The leader-for-life package hardcodes the exponential backoff interval between attempts to become the leader, starting at 1s and capping at 16s. Because a container needs some time to initialize, attempts always hit the 16s maximum. We made the backoff interval configurable, reduced it to fit our situation, and contributed the change back to the operator framework community.
Reduce Leader Ramp-up Time
During the leader ramp-up time, each resource informer in each cluster issues a List call to the API server and synchronizes its cache with the returned results. The leader only starts reconciliation once all informer caches are synchronized.
Preload Informer Cache
One way to reduce the leader ramp-up time is to have standby controller pods preload their informer caches. In other words, initializing the informer cache is no longer exclusive to the leader but happens on every controller pod upon its creation. Note that registering event handlers remains exclusive to the leader; otherwise we would suffer from split brain.
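The split can be summarized as: every pod warms its caches at startup, and only the winner of the leader election registers handlers and reconciles. A toy sketch of that gating, with mocked informer details and illustrative names:

```go
package main

import "fmt"

// controllerPod sketches the split between cache preloading (done by
// every pod at startup) and event-handler registration (done only by
// the leader). In the real controller the preload is each informer's
// initial List populating its cache.
type controllerPod struct {
	cache    map[string]string // preloaded informer cache
	handlers int               // registered event handlers
	isLeader bool
}

// preloadCache runs on every pod at creation, leader or not, so a
// standby pod is already warm when it wins leadership.
func (p *controllerPod) preloadCache(objects map[string]string) {
	p.cache = objects
}

// becomeLeader registers handlers only now; registering them on a
// standby pod would make two pods reconcile at once (split brain).
func (p *controllerPod) becomeLeader() {
	p.isLeader = true
	p.handlers = 1
}

func main() {
	standby := &controllerPod{}
	standby.preloadCache(map[string]string{"default/job-1": "Running"})
	fmt.Println("cache warm:", len(standby.cache) == 1, "handlers:", standby.handlers)

	standby.becomeLeader()
	fmt.Println("leader:", standby.isLeader, "handlers:", standby.handlers)
}
```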
Use Readiness Probe to Ensure Graceful Rolling Upgrade
The informer cache preload runs in the background and does not, by itself, block a standby pod from becoming the leader. To enforce blocking, we defined a readiness probe that issues an HTTP GET request to periodically check whether all informer caches are synchronized. With a rolling upgrade strategy, the old leader pod is killed only after the new standby pod is ready, which ensures the new pod is always warmed up when it becomes the leader.
Table 1: Improvement on control plane leadership switch monitored by the proposed leadership switch metrics (4 observations before and after)
Table 1 shows the improvement on the control plane leadership switch. The average control plane leadership switch time has been decreased from 64s to 10s, with an 85% improvement.
With these efforts, we revamped control plane performance and raised its SLO from 99% to 99.9%. This is a huge milestone for the Kubernetes-based compute platform, demonstrating unprecedented reliability and availability. We are working toward even higher SLOs and have identified the following areas where control plane performance can be further improved.
- Proactive leadership handover: The leadership handover in leader-for-life is passive, because it depends on external components observing the pod deletion and releasing the resource lock. The time spent on garbage collection accounts for 50% of our current leadership handover time. With proactive handover, the leader intentionally releases its lock when it receives SIGTERM, before exiting. This will significantly reduce the leadership switch time.
- Reconcile Quality of Service (QoS): In this post, we presented our worker pool efficiency optimization in terms of delayed vs. immediate enqueue. As future work, we want to introduce reconcile QoS and workqueue tiering (for example, separate queues for different tiers of workloads, so that high tiers are never blocked or interfered with by lower ones).
Shout out to Suli Xu and Harry Zhang for their great contributions in building a high-performance control plane to support business needs. Special thanks to June Liu, Anson Qian, Haniel Martino, Ming Zong, Quentin Miao, Robson Braga and Martin Stankard for their feedback and support.