Scaling Clearbit to 2M API Requests Per Day


By Harlow Ward, Developer and Co-founder at Clearbit.


Clearbit builds Business Intelligence APIs. Our suite of APIs is focused on Lead Enrichment and Automated Research.

Clearbit lookup example

Our goal is to help modern businesses make better data-driven decisions. Our platform aggregates data from hundreds of public sources and packages it up into beautifully hand-crafted JSON payloads.
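
To give a flavour of what that looks like in practice, here’s a quick Ruby sketch of a person lookup. The endpoint path and the response fields are illustrative only, not our exact documented schema:

```ruby
require "net/http"
require "uri"
require "json"

# Illustrative only: the endpoint path and response shape are assumptions,
# not the exact documented Clearbit schema.
uri = URI("https://person.clearbit.com/v1/people/email/alex@example.com")

request = Net::HTTP::Get.new(uri)
request["Authorization"] = "Bearer #{ENV['CLEARBIT_API_KEY']}"

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

person = JSON.parse(response.body)
puts person.keys # e.g. name, employment, social profiles
```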

Customers use our APIs to:

  • Give their sales team more information on customers, leads, and prospects.
  • Integrate and surface person/company data to the end-users of their systems.
  • Underwrite transactions and reduce fraud.

Outside of our paid products, we also love releasing free products. These bite-sized APIs are hyper-focused on helping designers and developers enhance the user experience of their tools and systems.

A few of these freebies include:


Engineering at Clearbit

Our engineering team consists of three developers: Alex MacCaw (also our fearless CEO), Rob Holland, and myself.

We are a small dev team, and that means we all wear a lot of hats. Day-to-day, it’s not uncommon to jump between frontend HTML/JS/CSS, API design, service administration, DB administration, infrastructure management, and of course a little customer support.


Services Everywhere

We made the decision early on to build a microservice-first architecture. This means our system is composed of lots of tiny Single Responsibility Services (SRS anyone?).

In general these services are written in Ruby, leverage Sinatra to expose JSON endpoints, and use RSpec to verify accuracy. Each service maintains its own datastore; depending on the service's needs we’ll typically choose from Amazon RDS, Amazon DynamoDB, or hosted Elasticsearch with Found.
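
To make that concrete (if a little contrived), a single-responsibility service ends up being little more than a Sinatra app with a route or two. The route and data below are invented for illustration, not a real Clearbit service:

```ruby
require "sinatra"
require "json"

# Contrived single-responsibility service: resolve a company record by domain.
# A real service would hit its own datastore (RDS, DynamoDB, or Elasticsearch)
# instead of an in-memory hash.
COMPANIES = {
  "clearbit.com" => { name: "Clearbit", domain: "clearbit.com" }
}.freeze

get "/v1/companies/:domain" do
  content_type :json
  company = COMPANIES[params[:domain]]
  halt 404, { error: "unknown domain" }.to_json unless company
  company.to_json
end
```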

There are some great arguments to be made for a MonolithFirst architecture. However, in our case, we felt our data boundaries were reasonably clear from the beginning, which allowed us to make a few low-risk bets on building and running a microservice-first architecture. So far so good!

Our web services fall into two categories:

  1. External (publicly accessible, authenticated via API keys).
  2. Internal (accessible within VPC, locked down to specific security groups).

At any given time we’re running 70+ different internal services across a cluster of 18 machines. Our external (customer-facing) APIs are serving upwards of 2 million requests per day, and that number is rapidly increasing.


Early Days

When working with a microservice architecture it's difficult to overstate how important it is for a developer to be able to quickly push a new web service.

Our initial architecture was built on Amazon EC2 and leveraged dokku-alt (a Docker-powered mini-Heroku) to manage deployments.

Dokku-alt covered our basic requirements:

  • Git-based deploys.
  • Managing ENV vars outside of config files.
  • Ability to roll back in case of emergency.

However, as the number of servers grew some shortcomings of dokku-alt began to emerge. This was no fault of dokku-alt; we were just outgrowing our architecture.

As we added more machines, the problems compounded. The per-machine configuration management we had initially loved quickly became unsustainable. On top of that, running git push production master against every box in the cluster at once made for some nerve-racking deploys.

The state of our deployment system was beginning to take a toll on the team's productivity. It was time to make a change. We collectively decided to explore our options.


Current Stack

As our infrastructure grew, our deployment requirements also evolved:

  • Distributed configuration management.
  • Git push to only one repository.
  • Blue/Green style deploys.

After looking into solutions like Deis and Flynn, we decided we'd feel happier with something with simpler semantics. We were attracted to Fleet because of its simplicity and flexibility, and the reputation of the CoreOS team.

Coordinating configuration between machines became a breeze with the use of etcd. Now, when our deployer app builds a new Docker container, we can inject environment variables from etcd directly into the container.
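
As a rough sketch of that flow (the etcd key layout, address, and service name here are invented, not our actual deployer code), the deployer can read a service’s keys from etcd’s HTTP API and turn them into environment flags for docker run:

```ruby
require "net/http"
require "uri"
require "json"
require "shellwords"

# Sketch only: the key layout (/v2/keys/config/<service>) and local etcd
# address are assumptions, not our actual deployer implementation.
ETCD = "http://127.0.0.1:2379"

def etcd_env_vars(service)
  body  = Net::HTTP.get(URI("#{ETCD}/v2/keys/config/#{service}?recursive=true"))
  nodes = JSON.parse(body).fetch("node").fetch("nodes", [])
  nodes.map { |n| [File.basename(n["key"]), n["value"]] }.to_h
end

flags = etcd_env_vars("enrichment-api").map do |key, value|
  "-e #{key}=#{Shellwords.escape(value)}"
end.join(" ")

puts "docker run #{flags} clearbit/enrichment-api"
```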

From there, we use Fleet to distribute the units across our cluster of servers. We’ve found fleet-ui super handy for visualizing how those units are spread across the cluster.


fleetui


To keep our operational expenses down, we have a static pool of on-demand EC2 instances running the etcd quorum, HAProxy, and several of the HTTP front ends. On top of that, we leverage a dynamic pool of EC2 Spot Instances to absorb the spikes when throughput gets extremely high.

Word to the wise: don’t use Spot Instances as part of your etcd quorum. When someone else bids higher than the current Spot Price (and they will), the Spot Instances will disappear without warning.


Monitoring

It’s hard to overstate how important it’s been for us to have a deep and instantly available understanding of the current state of all our services.

Starting from the outside, we use Runscope to continually ping and analyze responses from our services. It’s been instrumental in verifying and maintaining the APIs with dynamic date versioning.
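
For anyone unfamiliar with date versioning: clients pin their integration to a date, and the server serves the newest API behaviour at or before that date. A minimal sketch of the idea (the header name and dates are invented for the example, not our actual scheme):

```ruby
require "sinatra"
require "date"

# Invented version dates for the example, newest first.
API_VERSIONS = [Date.new(2015, 6, 1), Date.new(2015, 1, 1)].freeze

before do
  requested = begin
    Date.parse(request.env["HTTP_API_VERSION"].to_s)
  rescue ArgumentError
    Date.today
  end
  @api_version = API_VERSIONS.find { |v| v <= requested } || API_VERSIONS.last
end

get "/v1/ping" do
  "pong (serving #{@api_version} behaviour)"
end
```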

Digging a level deeper, we use Librato for measuring and monitoring lower-level system behaviour. We’re diligent about creating alerts that will notify the team if anything seems awry.
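
Pushing a custom metric from one of the Ruby services only takes a couple of lines with the librato-metrics gem. A sketch, assuming that gem’s top-level authenticate/submit helpers and an invented metric name:

```ruby
require "librato/metrics"

# Assumes the librato-metrics gem; the metric name and source are invented.
Librato::Metrics.authenticate ENV["LIBRATO_EMAIL"], ENV["LIBRATO_API_KEY"]

Librato::Metrics.submit(
  api_requests: { value: 1, source: "enrichment-api" }
)
```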

Sentry notifies us immediately via Slack and email if any of our services are throwing errors. We’re big believers in the broken windows theory, and try to keep Sentry as clean as possible.
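
Wiring Sentry into a Sinatra service takes just a few lines thanks to Rack. A sketch, assuming the sentry-raven gem and its Raven::Rack middleware:

```ruby
require "sinatra"
require "raven"

# Assumes the sentry-raven gem; the DSN is injected via the environment.
Raven.configure do |config|
  config.dsn = ENV["SENTRY_DSN"]
end

# Reports any unhandled exception raised by the routes below.
use Raven::Rack

get "/v1/lookup" do
  raise "something went wrong" # shows up in Sentry, which pings Slack/email
end
```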

Finally, we use SumoLogic as our log aggregation platform. We run Sumo Collectors on each of our hosts. SumoLogic is our last line of defense for spotting inconsistent system behaviour and debugging historical issues.


Looking Forward

We have a private contrib repo with a handful of Rack middlewares that are shared across our services. These middlewares dramatically cut down on code duplication around authentication, authorization, rate limiting, and IP restrictions.
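
As a taste of what lives in that contrib repo, here’s a stripped-down sketch of an API key authentication middleware. The header handling, error body, and key lookup are invented for the example, not our actual contrib code:

```ruby
require "rack"
require "json"
require "set"

# Invented example of a shared Rack middleware; a real version would validate
# keys against an internal accounts service rather than a static set.
class ApiKeyAuthentication
  def initialize(app, valid_keys)
    @app        = app
    @valid_keys = valid_keys
  end

  def call(env)
    token = env["HTTP_AUTHORIZATION"].to_s[/\ABearer (.+)\z/, 1]

    if token && @valid_keys.include?(token)
      @app.call(env)
    else
      [401, { "Content-Type" => "application/json" }, [{ error: "Invalid API key" }.to_json]]
    end
  end
end

# In any external service:
#   use ApiKeyAuthentication, Set.new(ENV["API_KEYS"].to_s.split(","))
```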

In general, the shared middleware approach has worked well for us. However, as we look to the future and the team continues to experiment with new languages, those Ruby middlewares can’t be shared across the rest of an increasingly polyglot system.

Our goal is to push this shared logic out of the services and into the proxy layer (possibly with the help of VulcanD, Kong, or some custom HAProxy foo).

If you have made a transition like this before, or have an elegant idea of how to somersault over this hurdle, I’d love to buy you a beverage. harlow@clearbit.com

