Using Kafka to Throttle QPS on MySQL Shards in Bulk Write APIs

Pinterest
Pinterest is a social bookmarking site where users collect and share photos of their favorite events, interests and hobbies. One of the fastest growing social networks online, Pinterest is the third-largest such network behind only Facebook and Twitter.

At Pinterest, backend core services are in charge of various operations on Pins, boards, and users, on behalf of both Pinners and internal services. Pinners' operations are classified as online traffic because they require a real-time response, while internal traffic is classified as offline because it is processed asynchronously and no real-time response is required.

The services' read and write APIs are shared between these two kinds of traffic. The majority of Pinners' operations on a single object (such as creating a board, saving a Pin, or editing user settings through web or mobile) are routed to one of the APIs to fetch and update data in the datastores. Meanwhile, internal services use the same APIs to act on a large number of objects on behalf of users (such as deactivating spam accounts or removing spam Pins).

To offload internal offline traffic from these APIs, so that online traffic can be handled exclusively with better reliability and performance, the write APIs should support batches of objects. We proposed and implemented a bulk write platform on top of Kafka. The platform also serves internal services' high QPS more efficiently, guaranteeing high throughput without tight rate restrictions. In this post, we'll cover the characteristics of internal offline traffic, the challenges we faced, and how we tackled them by building a bulk write platform in backend core services.

Datastores and write APIs

At Pinterest, MySQL is one of the major datastores for content created by users. To store billions of Pins, boards, and other data for hundreds of millions of Pinners, many MySQL database instances form a MySQL cluster, which is split into logical shards to manage and serve the data more efficiently. All data is split across these shards.

To read and write one user's data efficiently, all of that data is stored on the same shard, so APIs only need to fetch from one shard without fanning out queries to many shards. To prevent any single request from occupying MySQL resources for a long time, every query is configured with a timeout.
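The post doesn't describe the exact user-to-shard mapping, so as an illustrative sketch only (hypothetical `NUM_SHARDS` and a simple modulo scheme), co-locating a user's data means every object resolves to its owner's shard:

```python
NUM_SHARDS = 4096  # hypothetical number of logical shards

def shard_for_user(user_id: int, num_shards: int = NUM_SHARDS) -> int:
    """Map a user to a logical shard. All of the user's Pins and
    boards live on this one shard, so reads need no fan-out."""
    return user_id % num_shards

# Every object owned by the same user resolves to the same shard.
assert shard_for_user(1234567) == shard_for_user(1234567)
```

Any deterministic mapping works here; the point is that a single-user operation always touches exactly one shard.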

All write APIs of core services were originally built for online traffic from Pinners and work well there: only a single object is accepted (because a Pinner operates on a single object most of the time) and the operation is lightweight. Even when Pinners take a bulk action, e.g. moving a number of Pins to a section of one board, performance is still good because the number of objects isn't very big and the write APIs can handle them one by one.

Challenges

The situation changes as more and more internal services use the existing write APIs for various bulk operations (such as removing many Pins from a spam user within a short period of time, or backfilling a new field for a huge number of existing Pins). Because the write APIs can only handle one object at a time, they see much higher traffic, with spikes.

To handle more traffic, the services can be autoscaled, but that does not solve the problem completely because the capacity of the system is bounded by the MySQL cluster, and with the existing architecture it's hard to autoscale the MySQL cluster itself.

To protect the services and the MySQL cluster, rate limiting is applied to the write APIs.

Although throttling can help to some extent, it has several drawbacks that prevent backend core services from being more reliable and scalable.

  1. Online and offline traffic to an API affect each other. If a spike of internal offline traffic occurs, online traffic to the same API suffers higher latency and degraded performance, which hurts the user experience.
  2. As more and more internal traffic is sent to an API, its rate limits need to be raised carefully so the API can serve the extra traffic without affecting existing traffic.
  3. Rate limiting does not stop hot shards. When internal services write data for a specific user, e.g. ingesting a large number of feed Pins for a partner, all requests target the same shard. A hot shard is expected because of the spike of requests in a short period of time, and the situation gets worse when the update operations in MySQL are expensive.

Since internal services need to handle a big number of objects within a short period of time and do not need a real-time response, requests that target the same shard can be combined and handled asynchronously with one shared query to MySQL, improving efficiency and conserving MySQL connection resources. All combined batch requests should be processed at a controlled rate to avoid hot shards.

Bulk write architecture

The bulk write platform was architected to support high QPS from internal services with high throughput and zero hot shards. Migrating to the platform should also be straightforward: clients simply call the new APIs.

Bulk write APIs and Proxy

To support write operations (update, delete, and create) on a batch of objects, a set of bulk write APIs is provided for internal services; each accepts a list of objects instead of a single object. This dramatically reduces QPS to the APIs compared to the regular write APIs.
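The post doesn't show the API signatures; the difference can be sketched minimally as follows (hypothetical names, plain Python rather than the actual service interfaces):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PinUpdate:
    pin_id: int
    fields: dict  # field name -> new value

# Regular write API: one object per call, so N objects cost N calls.
def update_pin(update: PinUpdate) -> None:
    ...

# Bulk write API: one call carries a whole list of objects, so
# client-side QPS drops roughly by the batch size.
def bulk_update_pins(updates: List[PinUpdate]) -> None:
    ...
```

The bodies are elided; the point is purely the shape of the request: a list of objects in one call instead of one object per call.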

The proxy is a Finagle service that maps incoming requests to different batching modules according to the type of objects; the batching modules combine requests targeting the same shard.

Batching Module

The batching module splits a batch request into small batches based on the operation type and object type, so that each batch of objects can be processed efficiently in MySQL, where every query has a timeout configured.

This was designed around two major considerations:

  • Firstly, the write rate to every shard should be configurable to avoid hot shards, since shards may contain different numbers of records and perform differently. One batch request from the proxy contains objects on different shards. To control QPS accurately at the shard level, the batch request is split into batches based on the target shards. The 'Shard Batching' module splits requests by affected MySQL shard.
  • Secondly, each write operation has its own batch size. Operations on different object types perform differently because they update different numbers of tables. For instance, creating a new Pin may change four to five tables, while updating an existing Pin may change only two. An update query may also take a varying length of time, so a batch update for one object type may see different latencies at different batch sizes. To make batch updates efficient, the batch size is configured separately for each write operation. The 'Operation Batching' module further splits these requests by type of operation.
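The two stages above can be sketched in Python (the request shape, shard field, and per-operation batch sizes are all hypothetical illustrations, not the production values):

```python
from collections import defaultdict
from typing import Dict, List

# Hypothetical batch sizes per operation type: creates touch more
# tables than updates, so they get a smaller batch.
BATCH_SIZES = {"create": 20, "update": 50, "delete": 50}

def shard_batch(requests: List[dict]) -> Dict[int, List[dict]]:
    """Stage 1 ('Shard Batching'): group requests by target shard."""
    by_shard: Dict[int, List[dict]] = defaultdict(list)
    for req in requests:
        by_shard[req["shard"]].append(req)
    return by_shard

def operation_batch(shard_requests: List[dict]) -> List[List[dict]]:
    """Stage 2 ('Operation Batching'): within one shard, split by
    operation type into batches small enough to finish before the
    per-query MySQL timeout."""
    by_op: Dict[str, List[dict]] = defaultdict(list)
    for req in shard_requests:
        by_op[req["op"]].append(req)
    batches: List[List[dict]] = []
    for op, reqs in by_op.items():
        size = BATCH_SIZES[op]
        for i in range(0, len(reqs), size):
            batches.append(reqs[i:i + size])
    return batches
```

Each resulting batch then targets exactly one shard with one operation type, so it can be executed as a single bounded MySQL query.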

Rate Limiter with Kafka

All objects in a batch request from the batching module are on the same shard. A hot shard is expected if too many requests hit one specific shard; it affects all other queries to that shard and degrades the performance of the system. To avoid this, all requests to one shard should be sent at a controlled rate so the shard is not overwhelmed and can handle requests efficiently. To achieve this, one rate limiter is needed per shard, controlling all requests to that shard.

To support high QPS from internal clients at the same time, all of their requests should be stored temporarily in the platform and processed at a controlled speed. Kafka is a good fit for these purposes:

  1. Kafka can handle very high QPS for both writes and reads.
  2. Kafka is a reliable distributed message store that buffers batch requests so they can be processed at a controlled rate.
  3. Kafka rebalances load and manages consumers automatically.
  4. Each partition is assigned exclusively to one consumer (within the same consumer group), and that consumer can rate-limit its requests precisely.
  5. Requests in all partitions are processed by different consumer processors simultaneously, so overall throughput is very high.

(Figure: P = partition, C = consumer processor)

Kafka Configuration

Firstly, each shard in the MySQL cluster has a matching partition in Kafka, so all requests to a shard are published to the corresponding partition and processed by one dedicated consumer processor at a precise QPS. Secondly, a large number of consumer processors run, so that at most one or two partitions are assigned to each consumer processor, to achieve maximum throughput.
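With a one-to-one shard-to-partition mapping, the producer side reduces to publishing each shard's batch to the partition with the same index. A sketch with a stand-in producer (the topic name and client are hypothetical; a real deployment would use an actual Kafka producer client):

```python
class StubProducer:
    """Stand-in for a real Kafka producer client; records sends."""
    def __init__(self):
        self.sent = []

    def send(self, topic, value, partition):
        self.sent.append((topic, partition, value))

def publish_batch(producer, shard_id: int, batch: list,
                  topic: str = "bulk-write") -> None:
    # Partition index == shard index, so one dedicated consumer
    # processes all writes for a given MySQL shard, in order.
    producer.send(topic, value=batch, partition=shard_id)
```

Pinning the partition explicitly (rather than hashing a key) is what guarantees each shard's traffic lands on exactly one partition and therefore one consumer.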

Consumer Processor

The consumer processor rate-limits QPS on a shard in two steps:

  • Firstly, the number of requests a consumer can pull from its partition at a time is configured.
  • Secondly, the consumer consults the per-shard configuration to get the precise number of batch requests that a shard can handle, and uses Guava RateLimiter to do the rate control. For instance, some shards may be limited to low traffic because hot users are stored on them.
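The production code uses Guava's RateLimiter; a roughly equivalent minimal sketch in Python (the per-shard QPS values are hypothetical, and this simplified limiter ignores Guava's burst handling):

```python
import time

class RateLimiter:
    """Minimal blocking rate limiter, loosely analogous to Guava's
    RateLimiter.acquire(): sleeps until the next permit is due."""
    def __init__(self, permits_per_sec: float):
        self.interval = 1.0 / permits_per_sec
        self.next_free = time.monotonic()

    def acquire(self) -> None:
        now = time.monotonic()
        wait = max(0.0, self.next_free - now)
        if wait:
            time.sleep(wait)
        self.next_free = max(now, self.next_free) + self.interval

# Hypothetical per-shard config: shards holding hot users get a
# lower rate so they are not overwhelmed.
SHARD_QPS = {0: 50.0, 1: 10.0}
limiters = {s: RateLimiter(q) for s, q in SHARD_QPS.items()}

def process(shard_id: int, batch: list) -> None:
    limiters[shard_id].acquire()  # pace writes to this shard
    # ... execute the batched MySQL query here ...
```

Because each partition maps to one shard and one consumer, a single in-process limiter per shard is enough; no distributed rate limiting is needed.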

Consumer processors handle different failures with appropriate actions. To handle congestion in the threadpool, the consumer processor retries the task after a configured backoff time if the threadpool is full and busy with existing tasks. To handle failures in MySQL shards, it checks the response from the MySQL cluster to catch errors and exceptions, and takes the appropriate action for each failure. For instance, when it sees two consecutive timeout failures, it alerts the system admin and stops pulling and processing requests for a configured wait time. With these mechanisms, the success rate of request processing is high.
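That failure handling can be sketched as follows; the two-consecutive-timeouts threshold is from the description above, while the backoff and pause durations, result strings, and class shape are hypothetical:

```python
import time

CONSECUTIVE_TIMEOUT_LIMIT = 2
PAUSE_SECONDS = 30          # hypothetical wait before resuming pulls
RETRY_BACKOFF_SECONDS = 1   # hypothetical backoff when threadpool is full

class ConsumerProcessor:
    def __init__(self, execute, alert, sleep=time.sleep):
        self.execute = execute    # runs the batched MySQL write
        self.alert = alert        # notifies the system admin
        self.sleep = sleep
        self.timeouts_in_a_row = 0

    def handle(self, batch) -> bool:
        result = self.execute(batch)
        if result == "timeout":
            self.timeouts_in_a_row += 1
            if self.timeouts_in_a_row >= CONSECUTIVE_TIMEOUT_LIMIT:
                # Two consecutive timeouts: alert the admin and stop
                # pulling for a configured wait time before resuming.
                self.alert("consecutive MySQL timeouts")
                self.sleep(PAUSE_SECONDS)
                self.timeouts_in_a_row = 0
            return False
        if result == "threadpool_full":
            # Congested threadpool: retry after a configured backoff.
            self.sleep(RETRY_BACKOFF_SECONDS)
            return self.handle(batch)
        self.timeouts_in_a_row = 0
        return True
```

Injecting `execute`, `alert`, and `sleep` keeps the policy testable; the real processor would wire these to the MySQL client, the alerting system, and real sleeps.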

Results

Several internal use cases have been launched onto the bulk write platform with good performance. For instance, feed ingestion for partners uses the platform, with large improvements in both the time spent and the success rate of the process. The results of ingesting around 4.3 million Pins are shown as follows.

Also, hot shards are no longer seen during feed ingestion, which previously caused many similar issues.

What’s next

As more internal traffic moves from the existing write APIs to the new bulk write APIs, the performance of the APIs for online traffic improves, with less downtime and lower latency. This makes the systems more reliable and efficient.

The next step for the new platform is to support more cases by extending existing operations on more object types.

Acknowledgments

Thanks to Kapil Bajaj, Carlo De Guzman, Zhihuang Chen and the rest of the Core Services team at Pinterest! Also special thanks to Brian Pin, Sam Meder from the Shopping Infra team for providing support.
