99% to 99.9% SLO: High Performance Kubernetes Control Plane at Pinterest


By Shunyao Li | Software Engineer, Cloud Runtime


Over the past three years, the Cloud Runtime team’s journey has gone from “Why Kubernetes?” to “How do we scale?”. There is no doubt that the Kubernetes-based compute platform has achieved huge success at Pinterest. We have been supporting big data processing, machine learning, distributed training, workflow engines, CI/CD, and internal tools, backing every engineer at Pinterest.

Why Control Plane Latency Matters

As more and more business-critical workloads onboard onto Kubernetes, it is increasingly important to have a high-performance control plane that efficiently orchestrates every workload. Critical workloads such as content model training and ads reporting pipelines are delayed if translating user workloads into Kubernetes-native pods takes too long.

To measure control plane performance, we introduced top-line business metrics in the form of a Service Level Indicator and Objective (SLI/SLO) in early 2021. Our control plane SLI is reconcile latency, defined as the time from when a user change is received to when it propagates out of the control plane. For example, one reconcile latency measurement for batch jobs is the delay between workload creation and pod creation.
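As a concrete illustration, the batch-job measurement above can be computed as follows. This is a minimal sketch; the function names and the event shape are illustrative, not our production pipeline:

```python
def reconcile_latencies(events):
    """Sketch: reconcile latency per workload, defined here as the
    delay between workload creation and the corresponding pod creation.
    `events` maps workload id -> (workload_created_ts, pod_created_ts)."""
    return {w: pod_ts - wl_ts for w, (wl_ts, pod_ts) in events.items()}

def slo_attainment(latencies, threshold_s):
    """Fraction of reconciles that met the latency threshold."""
    if not latencies:
        return 1.0
    met = sum(1 for v in latencies.values() if v <= threshold_s)
    return met / len(latencies)
```

The SLO is then a target on this attainment ratio (e.g. 0.999) over a reporting window.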

The initial SLO was set to 99%. At the time of writing this post, we are proudly serving a control plane SLO of 99.9%. This post is about how we improved the control plane to achieve high performance.

Control Plane in a Nutshell

The control plane is the nerve center of the Kubernetes platform and is responsible for workload orchestration. It listens to changes from the Kubernetes API, compares the desired state of each resource with its actual status, and takes action to make the actual status match the desired one (reconciliation). Workload orchestration also includes making scheduling decisions about where to place workloads.

The Kubernetes control plane consists of a set of resource controllers. Our resource controllers are built on the controller framework, which has an informer-reflector-cache architecture. The informer uses the List-Watch mechanism to fetch and monitor resource changes from the Kubernetes API. The reflector updates the cache with resource changes and dispatches events for handling. The cache stores resource objects and serves List and Get calls. The controller framework follows the producer-consumer pattern: the event handler is the producer, responsible for queuing reconcile requests, while the controller worker pool is the consumer, pulling items from the workqueue to run the reconciliation logic.

Figure 1: Kubernetes Controller Framework
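The producer-consumer flow in Figure 1 can be sketched as follows. This is an illustrative Python model, not the actual Go controller framework; all names are ours, and the real workqueue additionally deduplicates keys and rate-limits retries:

```python
import queue
import threading

class Controller:
    """Minimal producer-consumer sketch of a controller: event handlers
    enqueue object keys; workers dequeue keys and run reconciliation."""

    def __init__(self, reconcile_fn, num_workers=2):
        self.workqueue = queue.Queue()   # holds object keys, not objects
        self.cache = {}                  # informer cache: key -> object
        self.reconcile_fn = reconcile_fn
        self.num_workers = num_workers
        self.threads = []

    def on_event(self, key, obj):
        """Producer: the informer's event handler updates the cache
        and queues a reconcile request for the key."""
        self.cache[key] = obj
        self.workqueue.put(key)

    def run_worker(self):
        """Consumer: pull keys from the workqueue and reconcile
        against the latest cached state."""
        while True:
            key = self.workqueue.get()
            if key is None:              # shutdown sentinel
                break
            self.reconcile_fn(key, self.cache.get(key))
            self.workqueue.task_done()

    def start(self):
        self.threads = [threading.Thread(target=self.run_worker)
                        for _ in range(self.num_workers)]
        for t in self.threads:
            t.start()

    def stop(self):
        for _ in self.threads:
            self.workqueue.put(None)
        for t in self.threads:
            t.join()
```

Because workers read the object from the cache at processing time, multiple rapid events for the same key can collapse into a single reconciliation of the latest state.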

Challenge 1: Worker Pool Efficiency

The controller worker pool is where the actual status to desired status reconciliation occurs. We leveraged the metrics provided by the workqueue package to gain a deep insight into the worker pool efficiency. These metrics are:

  • Work duration: how long it takes to process an item from workqueue
  • Queue duration: how long an item stays in workqueue before being processed
  • Enqueue rate: how often an item gets enqueued
  • Retry rate: how often an item gets retried
  • Queue depth: current depth of workqueue
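These metrics can all be derived from timestamps taken at enqueue, dequeue, and completion. The following sketch is illustrative only; the real workqueue package is written in Go and exposes these as Prometheus metrics:

```python
import time
from collections import deque

class InstrumentedQueue:
    """Sketch of how workqueue-style metrics are derived: record a
    timestamp at add/get/done and compute durations and depth."""

    def __init__(self):
        self._items = deque()
        self._enqueued_at = {}
        self._started_at = {}
        self.queue_durations = []   # time an item waited in the queue
        self.work_durations = []    # time spent processing an item
        self.adds = 0               # numerator for the enqueue rate

    def add(self, item):
        self.adds += 1
        self._enqueued_at[item] = time.monotonic()
        self._items.append(item)

    def get(self):
        item = self._items.popleft()
        now = time.monotonic()
        self.queue_durations.append(now - self._enqueued_at.pop(item))
        self._started_at[item] = now
        return item

    def done(self, item):
        self.work_durations.append(time.monotonic() - self._started_at.pop(item))

    def depth(self):
        return len(self._items)
```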

Among these metrics, queue depth drew our attention because its spikes correlate strongly with control plane performance degradation. Spikes in queue depth indicate head-of-line blocking, which usually happens when a large number of irrelevant items are enqueued in a short period of time. Items that genuinely need to be reconciled then wait in the queue longer, causing SLI dips.

Figure 2: Correlation between control plane queue depth spikes and control plane instant SLI dips.

To resolve the head-of-line blocking, we categorize informer events and handle them with different priorities. User-triggered events are high priority and need to be reconciled immediately, e.g., Create events triggered by users creating workloads or Update events triggered by users updating workload labels. On the other hand, some system-triggered events are low priority, e.g., a Create event during informer initialization or an Update event during the informer’s periodic resync. These do not affect our SLI and are not as time-sensitive as user-triggered events, so they can be delayed to keep them from piling up in the queue and blocking urgent events. The following sections describe how to identify and delay these system-triggered events.

Create Events During Informer Initialization

Each time we update the controller, the informer initializes its List-Watch mechanism by issuing a List call to the API server. It then stores the returned results in its cache and triggers a Create event for each result, causing a spike in queue depth. The solution is to delay any subsequent Create event for an existing object: an object cannot be created twice by a user, so any subsequent Create event must come from informer initialization.
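One way to implement this check is to remember which keys have already been observed; a Create for an already-seen key can only be a replay from informer initialization. This is a hedged sketch of the idea, not our production Go logic:

```python
def classify_create_event(key, seen_keys):
    """Sketch: decide whether a Create event is user-triggered or a
    replay from informer initialization. A user can create a given
    object only once, so a Create for a key we have already observed
    must come from the informer's initial List."""
    if key in seen_keys:
        return "delay"        # system-triggered: enqueue with a delay
    seen_keys.add(key)
    return "immediate"        # user-triggered: enqueue right away
```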

Figure 3: Control plane queue depth spikes to 10k during an informer initialization, resulting in a dip in control plane instant SLI.

Update Events During Informer Periodic Resync

Periodically, the informer goes over all items in its cache and triggers an Update event for each. These events are enqueued at the same time and produce a queue depth spike. As shown in Figure 2, the spikes align with the informer periodic resync interval we configured.

Update events triggered by periodic resync are easy to identify: the old and new objects are always identical, since both come from the informer cache. The solution is to delay Update events whose old and new objects are deeply equal. The delay is randomized so that queue depth spikes are smoothed out by scattering resync requests over a period of time.
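The filter can be sketched as below. The function and parameter names are illustrative (in particular the jitter bound is an assumption, not our configured value), and Python’s `==` stands in for the deep-equality check:

```python
import random

def enqueue_update(old_obj, new_obj, queue_now, queue_later, max_jitter_s=300):
    """Sketch: Update events replayed by a periodic resync carry
    identical old and new objects (both come from the informer cache),
    so they can be deferred with a random delay that scatters the
    resync burst over time."""
    if old_obj == new_obj:                  # deep equality: resync replay
        delay = random.uniform(0, max_jitter_s)
        queue_later(delay)                  # delayed enqueue smooths the spike
        return "delayed"
    queue_now()                             # real user change: reconcile now
    return "immediate"
```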

Result

The above optimizations solved the head-of-line blocking problem in the worker pool. There are no longer recurring spikes in control plane queue depth: the average queue depth during informer periodic resync has been reduced by 97%, from 1k to 30, and the instant SLI dips caused by queue depth spikes have been eliminated.

Figure 4: Improvement on workqueue efficiency

Challenge 2: Leadership Switch

Only the leader in the controller fleet does the actual reconciliation work, and leadership switches happen frequently during deployments or controller pod evictions. A prolonged leadership switch can have a considerable negative impact on the control plane instant SLI.

Figure 5: Control plane leadership switches result in instant SLI dips.

Leader Election Mechanisms

There are two common leader election mechanisms for the Kubernetes control plane.

  • Leader-with-lease: the leader pod periodically renews a lease and gives up leadership when it cannot renew the lease. Kubernetes native components including cluster-autoscaler, kube-controller-manager, and kube-scheduler are using leader-with-lease in client-go.
  • Leader-for-life: the leader pod only gives up leadership when it is deleted and its dependent configmap is garbage collected. The configmap is used as the source of truth for leadership, so it is impossible to have two leaders at the same time (a situation known as split brain). All resource controllers in our control plane use the leader-for-life mechanism from the operator framework to ensure we have at most one leader at a time.
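The difference between the two mechanisms can be modeled as below. This is an illustrative sketch, not the client-go or operator-framework implementation:

```python
import time

class LeaseLock:
    """Sketch of leader-with-lease: leadership expires unless the
    holder keeps renewing the lease."""
    def __init__(self, lease_seconds):
        self.lease_seconds = lease_seconds
        self.holder = None
        self.renewed_at = 0.0

    def try_acquire(self, candidate, now=None):
        now = time.monotonic() if now is None else now
        expired = now - self.renewed_at > self.lease_seconds
        if self.holder is None or expired or self.holder == candidate:
            self.holder, self.renewed_at = candidate, now
            return True
        return False

class ForLifeLock:
    """Sketch of leader-for-life: the lock is released only when the
    holder is explicitly deleted (its configmap garbage collected),
    so two leaders can never coexist."""
    def __init__(self):
        self.holder = None

    def try_acquire(self, candidate):
        if self.holder is None:
            self.holder = candidate
            return True
        return self.holder == candidate

    def release_on_delete(self, holder):
        if self.holder == holder:
            self.holder = None
```

The trade-off: leader-with-lease recovers on its own after a lease expiry but risks split brain under clock or network trouble, while leader-for-life is strictly single-leader at the cost of waiting for deletion and garbage collection.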

In this post, we focus on the optimization of the leader-for-life approach to reduce control plane leadership switch time and improve control plane performance.

Monitoring

To monitor the leadership switch time, we implemented fine-grained leadership switch metrics with the following phases:

  • Leaderless: when there is no leader
  • Leader ramp-up: the time from a controller pod becoming leader to its first reconciliation. The new leader pod cannot begin to reconcile as soon as it becomes the leader; instead, it must wait until all relevant informers are synchronized.

Figure 6: Diagram of the leadership switch procedure

Figure 7: Control plane leadership switch monitored by the proposed leadership switch metrics

As shown in Figure 7, the control plane leadership switch usually takes more than one minute to complete, which is unacceptable for a high-performance control plane. We proposed the following solutions to reduce the leadership switch time.

Reduce Leaderless Time

The leader-for-life package hardcodes the exponential backoff interval between attempts to become the leader, starting at 1s and capping at 16s. When a container needs some time to initialize, it always hits the 16s maximum. We made the backoff interval configurable, reduced it to fit our situation, and contributed the change back to the operator framework community.
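The retry schedule can be sketched as follows, using the hardcoded defaults described above; the function itself is illustrative:

```python
def backoff_intervals(base_s=1.0, cap_s=16.0, attempts=6):
    """Sketch of the leader-for-life retry schedule: exponential
    backoff starting at base_s, doubling each attempt, capped at
    cap_s. The upstream package hardcoded base=1s and cap=16s;
    making both configurable lets a slow-starting container retry
    on a schedule that fits its initialization time."""
    interval = base_s
    out = []
    for _ in range(attempts):
        out.append(min(interval, cap_s))
        interval *= 2
    return out
```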

Reduce Leader Ramp-up Time

During the leader ramp-up time, each resource informer in each cluster initiates a List call to the API server and synchronizes its cache with the returned results. The leader will only start reconciliation when all informer caches are synchronized.

Preload Informer Cache

One way to reduce the leader ramp-up time is to have standby controller pods preload their informer caches. In other words, initializing the informer cache is no longer exclusive to the leader; it happens in every controller pod upon creation. Note that registering event handlers remains exclusive to the leader; otherwise we would suffer from split brain.

Use Readiness Probe to Ensure Graceful Rolling Upgrade

The informer cache preload procedure runs in the background and does not by itself block a standby pod from becoming the leader. To enforce the blocking, we defined a readiness probe that issues an HTTP GET request to periodically check whether all informer caches are synced. With a rolling upgrade strategy, the old leader pod is killed only after the new standby pod is ready, which ensures the new pod is always warmed up when it becomes the leader.
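A minimal sketch of such a readiness endpoint, in Python using only the standard library (our controllers are Go, and the path and helper names here are assumptions):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def ready_status(informers_synced):
    """Return 200 only when every informer cache reports synced;
    `informers_synced` is a list of zero-arg callables (illustrative)."""
    return 200 if all(f() for f in informers_synced) else 503

def make_readiness_handler(informers_synced):
    """Build a handler serving /ready for the kubelet readiness probe:
    the pod is marked Ready only once all caches are warmed, so a
    rolling upgrade keeps the old leader until the standby is ready."""
    class ReadinessHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            code = ready_status(informers_synced) if self.path == "/ready" else 404
            self.send_response(code)
            self.end_headers()

        def log_message(self, *args):   # silence per-request logging
            pass
    return ReadinessHandler
```

In the pod spec, the readiness probe would point `httpGet` at this port and path with a short `periodSeconds`.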

Result

Table 1: Improvement on control plane leadership switch monitored by the proposed leadership switch metrics (4 observations before and after)

Table 1 shows the improvement in the control plane leadership switch. The average leadership switch time decreased from 64s to 10s, an 85% improvement.

What’s Next

With these efforts, we revamped the control plane performance and raised its SLO from 99% to 99.9%. This is a huge milestone for the Kubernetes-based compute platform, demonstrating unprecedented reliability and availability. We are working toward even higher SLOs and have identified the following areas where control plane performance can be further improved.

  • Proactive leadership handover: The leadership handover in leader-for-life is passive because it depends on external components observing the pod deletion and releasing the resource lock. Garbage collection accounts for 50% of our current leadership handover time. With proactive handover, the leader intentionally releases its lock when it receives SIGTERM, before exiting. This will significantly reduce the leadership switch time.
  • Reconcile Quality of Service (QoS): In this post, we presented our worker pool optimizations in terms of delayed vs. immediate enqueue. For future work, we want to introduce reconcile QoS and workqueue tiering (for example, separate queues for different tiers of workloads, so that high tiers are never interfered with or blocked).

Acknowledgement

Shout out to Suli Xu and Harry Zhang for their great contributions in building a high-performance control plane to support business needs. Special thanks to June Liu, Anson Qian, Haniel Martino, Ming Zong, Quentin Miao, Robson Braga and Martin Stankard for their feedback and support.
