John Egan
jwegan
Pinterest
45 points

  • GitHub

    Deploying software at Pinterest


    GitHub Enterprise is our version-control overlay; it manages code reviews, facilitates code merging, and has a great API.
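
    For illustration, a minimal sketch of calling that API: on a GitHub Enterprise host the v3 REST API lives under /api/v3. The host, repo, and token below are hypothetical placeholders.

    ```python
    # Minimal sketch: list open pull requests via the GitHub Enterprise
    # REST API. Host, org/repo, and token are hypothetical placeholders.
    import requests

    GHE_HOST = "https://github.example.com"  # hypothetical GHE host
    TOKEN = "personal-access-token"          # hypothetical token

    resp = requests.get(
        f"{GHE_HOST}/api/v3/repos/example-org/example-repo/pulls",
        params={"state": "open"},
        headers={"Authorization": f"token {TOKEN}"},
    )
    resp.raise_for_status()
    for pr in resp.json():
        print(pr["number"], pr["title"])
    ```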

  • Apache Storm

    Scalable A/B experiments at Pinterest


    In addition to batch processing, we also wanted to achieve real-time data processing. For example, to improve the success rate of experiments, we needed to figure out experiment group allocations in real time once the experiment configuration was pushed out to production. We used Storm to tail Kafka and compute aggregated metrics in real time to provide crucial stats.
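
    The actual pipeline is a Storm topology; as a rough stand-in for the tail-Kafka-and-aggregate idea, here is a minimal Python sketch using the kafka-python client. The topic name and JSON message shape are assumptions.

    ```python
    # Rough sketch of tailing Kafka and aggregating group allocations
    # (the real pipeline is a Storm topology; this stand-in uses
    # kafka-python). Topic name and message shape are assumptions.
    import json
    from collections import Counter

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "experiment_activations",            # hypothetical topic
        bootstrap_servers=["localhost:9092"],
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    group_counts = Counter()
    for msg in consumer:
        event = msg.value
        # Count users allocated to each (experiment, group) pair as
        # events stream in, giving real-time allocation stats.
        group_counts[(event["experiment"], event["group"])] += 1
    ```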

  • HBase

    Scalable A/B experiments at Pinterest


    The final output is inserted into HBase to serve the experiment dashboard. We also load the output data into Redshift for ad-hoc analysis. For real-time experiment data processing, we use Storm to tail Kafka, process data in real time, and insert metrics into MySQL, so we can identify group-allocation problems and send out real-time alerts and metrics.
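
    A minimal sketch of the HBase insert step using the happybase client; the Thrift host, table name, row-key scheme, and column family are assumptions.

    ```python
    # Minimal sketch of writing aggregated experiment metrics to HBase
    # via happybase. Host, table, row key, and column family are
    # assumptions for illustration.
    import happybase

    connection = happybase.Connection("hbase-thrift.example.com")  # hypothetical host
    table = connection.table("experiment_metrics")                 # hypothetical table

    # Row key: experiment id plus date; one column per metric in family "m".
    table.put(
        b"exp_42:2024-01-01",
        {
            b"m:users": b"10234",
            b"m:repins": b"5120",
        },
    )
    connection.close()
    ```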

  • Kafka

    Scalable A/B experiments at Pinterest


    [Figure: experiment data pipeline diagram (http://media.tumblr.com/d319bd2624d20c8a81f77127d3c878d0/tumblr_inline_nanyv6GCKl1s1gqll.png)]

    Front-end messages are logged to Kafka by our API and application servers. We have batch processing (on the middle-left) and real-time processing (on the middle-right) pipelines to process the experiment data. For batch processing, after the daily raw logs get to S3, we start our nightly experiment workflow to figure out experiment user groups and experiment metrics. We use our in-house workflow management system, Pinball, to manage the dependencies of all these MapReduce jobs.
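
    As an illustration of the first step, a minimal kafka-python producer emitting a front-end event from an application server; the topic name and payload shape are assumptions.

    ```python
    # Sketch of the logging step: an application server emits a
    # front-end event to Kafka. Topic and payload are hypothetical.
    import json

    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers=["localhost:9092"],
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(
        "frontend_events",  # hypothetical topic
        {"user_id": 123, "event": "repin", "experiment": "exp_42"},
    )
    producer.flush()
    ```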

  • Varnish

    Pinterest


    When you visit the site, you talk to a load balancer, which chooses a Varnish front-end, which in turn talks to our web front-ends, which used to run nine Python processes. Each of these processes serves the exact same version on any given web front-end.

  • Hadoop

    Pinterest


    The massive volume of discovery data that powers Pinterest and enables people to save Pins, create boards, and follow other users is generated through daily Hadoop jobs...

  • Zookeeper

    Pinterest


    Like many large-scale web sites, Pinterest’s infrastructure consists of servers that communicate with backend services composed of a number of individual servers for managing load and fault tolerance. Ideally, we’d like the configuration to reflect only the active hosts, so clients don’t need to deal with bad hosts as often. ZooKeeper provides a well-known pattern to solve this problem.
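
    The pattern in question is ephemeral znodes: each live server registers an ephemeral node, which vanishes when its session dies, so watchers of the parent path always see only active hosts. A minimal sketch with the kazoo client; hosts and paths are hypothetical.

    ```python
    # The well-known ZooKeeper membership pattern, sketched with kazoo:
    # each live server registers an ephemeral znode that disappears when
    # the server's session dies. Hosts and paths are hypothetical.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1.example.com:2181")
    zk.start()

    # Server side: register this host as an ephemeral node.
    zk.create("/services/web/host-001", b"10.0.0.1:8080",
              ephemeral=True, makepath=True)

    # Client side: watch the parent so the host list tracks live servers.
    @zk.ChildrenWatch("/services/web")
    def on_membership_change(hosts):
        print("active hosts:", hosts)
    ```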

  • Qubole

    Pinterest


    We ultimately migrated our Hadoop jobs to Qubole, a rising player in the Hadoop-as-a-Service space. Given that EMR had become unstable at our scale, we had to quickly move to a provider that played well with AWS (specifically, spot instances) and S3. Qubole supported AWS/S3 and was relatively easy to get started on. After vetting Qubole and comparing its performance against alternatives (including managed clusters), we decided to go with Qubole.

  • Amazon S3

    Pinterest


    We currently log 20 terabytes of new data each day, and have around 10 petabytes of data in S3.

  • Amazon S3

    Deploying software at Pinterest


    Amazon S3 is where we keep our builds. It’s a simple way to share data and scales with no intervention on our end.
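
    A minimal sketch of that flow with boto3: CI uploads the packaged build, and each node pulls the build it was told to run. Bucket and key names are hypothetical.

    ```python
    # Sketch of keeping builds in S3 with boto3: push from CI, pull on
    # each node. Bucket and key names are hypothetical.
    import boto3

    s3 = boto3.client("s3")

    # CI side: upload the packaged build.
    s3.upload_file("build-1234.tar.gz", "example-builds",
                   "web/build-1234.tar.gz")

    # Node side: fetch the build this node should be running.
    s3.download_file("example-builds", "web/build-1234.tar.gz",
                     "/tmp/build-1234.tar.gz")
    ```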

  • Zookeeper

    Deploying software at Pinterest


    ZooKeeper manages our state and tells each node what version of code it should be running.
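
    A sketch of that idea with kazoo: each node watches a znode holding the build version it should run and reacts when a deploy updates it. The path and payload are hypothetical.

    ```python
    # Sketch of deploy state in ZooKeeper: each node watches a znode
    # holding its target build version. Path and payload are hypothetical.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1.example.com:2181")
    zk.start()

    @zk.DataWatch("/deploys/web/current_build")
    def on_new_build(data, stat):
        if data is not None:
            print("should be running build:", data.decode("utf-8"))
            # ...fetch that build from S3 and restart serving processes...
    ```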

  • Jenkins

    Deploying software at Pinterest


    Jenkins is our continuous integration system for packaging builds and running unit tests after each check-in.

  • Hadoop

    Scalable A/B experiments at Pinterest


    The MapReduce workflow starts processing experiment data nightly, once the previous day's data has been copied over from Kafka. At that point, all the raw log requests are transformed into meaningful experiment results and in-depth analysis. To populate experiment data for the dashboard, we have around 50 jobs running to do all the calculations and data transforms.
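
    As an illustration of one such job, a minimal mrjob-style MapReduce that counts events per experiment group from raw log lines; the log format and field names are assumptions, and the real workflow chains roughly 50 interdependent jobs under Pinball.

    ```python
    # Illustrative MapReduce job (mrjob): count events per
    # (experiment, group) pair from JSON log lines. The log format and
    # field names are assumptions for illustration.
    import json

    from mrjob.job import MRJob

    class ExperimentGroupCounts(MRJob):
        def mapper(self, _, line):
            event = json.loads(line)
            yield (event["experiment"], event["group"]), 1

        def reducer(self, key, counts):
            yield key, sum(counts)

    if __name__ == "__main__":
        ExperimentGroupCounts.run()
    ```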