How Algolia Reduces Latency For 21B Searches Per Month


By Josh Dzielak, Developer Advocate at Algolia.


Algolia Paris meeting room


Algolia helps developers build search. At the core of Algolia is a built-from-scratch search engine exposed via a JSON API. In February 2017, we processed 21 billion queries and 27 billion indexing operations for 8,000+ live integrations. Some more numbers:

  • Query volume: 1B/day peak, 750M/day average (13K/s during peak hours)
  • Indexing operations: 10B/day peak, 1B/day average (spikes can be over 1M/s)
  • Number of API servers: 800+
  • Total memory in production: 64TB
  • Total I/O per day: 3.9PB
  • Total SSD storage capacity: 566TB

We’ve written about our stack before and are big fans of StackShare and the community here. In this post we’ll look at how our stack is designed from the ground up to reduce latency and the tools we use to monitor latency in production.

I’m Josh, a Developer Advocate at Algolia and formerly VP of Engineering at Keen IO. Being a developer advocate is pretty cool: I get to code, write and speak. I also get to talk with developers using Algolia every day.

Frequently, I get asked what Algolia’s API tech stack looks like. Many people are surprised when I tell them:

  1. The Algolia search engine is written in C++ and runs inside of nginx. All searches start and finish inside of our nginx module.

  2. API clients connect directly to the nginx host where the search happens. There are no load balancers or network hops.

  3. Algolia runs on hand-picked bare metal. We use high-frequency CPUs like the 3.9 GHz Intel Xeon E5-1650 v4 and load machines with 256GB of RAM.

  4. Algolia uses a hybrid-tenancy model. Some clusters are shared between customers and some are dedicated, so we can use hardware efficiently while providing full isolation to customers who need it.

  5. Algolia doesn’t use AWS or any cloud-based hosting for the API. We have our own servers spanning 47 datacenters in 15 global regions.


Algolia architecture diagram


Why this infrastructure?

The primary design goal for our stack is to aggressively reduce latency. For the kinds of searches Algolia powers - aimed at demanding consumers who are used to Google, Amazon and Facebook - latency is a UX killer. Search-as-you-type experiences, which have become the norm since Google introduced Instant search in 2010, have demanding requirements. Anything more than 100ms end-to-end can be perceived as sluggish, glitchy and distracting. But at 50ms or less the experience feels magical. We prefer magic.

Monitoring

Our monitoring stack helps us keep an eye on latency across all of our clusters. We use Wavefront to collect metrics from every machine. We like Wavefront because it’s simple to integrate (we have it plugged in to StatsD and collectd), provides good dashboards, and has integrated alerting.
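As a rough illustration of how a per-query latency metric might feed that pipeline (not our production code - the metric names and the Python statsd client are stand-ins), here is a minimal sketch:

```python
# Minimal sketch: time each query and emit the latency to a local StatsD
# daemon, from which the metrics pipeline (Wavefront in our case) can build
# dashboards and alerts. Metric names and prefix are hypothetical.
import time
import statsd  # pip install statsd

stats = statsd.StatsClient("localhost", 8125, prefix="search")

def timed_query(run_query, index_name, params):
    start = time.perf_counter()
    result = run_query(index_name, params)
    elapsed_ms = (time.perf_counter() - start) * 1000
    stats.timing("query.latency_ms", elapsed_ms)  # aggregated into percentiles downstream
    if elapsed_ms > 1000:
        stats.incr("query.slow")  # feeds the slow-query alerting described below
    return result
```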

We use PagerDuty to fire alerts for abnormalities like CPU depletion, resource exhaustion and long-running indexing jobs. For non-urgent alerts, like single process crashes, we dump and collect the core for further investigation. If the same non-urgent alert repeats more than a set number of times, we do trigger a PagerDuty alert. We keep only the last 5 core dumps to avoid filling up the disk.

When a query takes more than 1 second we send an alert into Slack. From there, someone on our Core Engineering Squad will investigate. On a typical day we might see just one of these, or none at all, so Slack has been a good fit.
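The Slack side of that alert can be as simple as an incoming webhook; a minimal sketch, assuming a hypothetical webhook URL and message format:

```python
# Minimal sketch: post a slow-query alert to a Slack channel through an
# incoming webhook. The URL below is a placeholder.
import json
import urllib.request

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def alert_slow_query(app_id, index_name, elapsed_ms):
    payload = {
        "text": f":warning: query on app {app_id} / index {index_name} "
                f"took {elapsed_ms:.0f} ms (over the 1,000 ms threshold)"
    }
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)
```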

Probes

We have probes in 45 locations around the world to measure the latency and availability of our production clusters. The probes are hosted with 12 different providers, not necessarily the same providers that host our API servers. The results from these probes are publicly visible at status.algolia.com. A custom internal API aggregates the large amount of data the probes fetch from each cluster and turns it into a single value per region.
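Conceptually, that aggregation boils down to grouping probe measurements by region and reducing them to one number; a minimal sketch (the data shape and the choice of median are assumptions):

```python
# Minimal sketch: collapse many probe measurements into a single latency
# value per region. Input shape and the median reduction are illustrative.
from collections import defaultdict
from statistics import median

def latency_per_region(probe_results):
    """probe_results: iterable of (region, cluster, latency_ms) tuples."""
    by_region = defaultdict(list)
    for region, _cluster, latency_ms in probe_results:
        by_region[region].append(latency_ms)
    return {region: median(values) for region, values in by_region.items()}

# latency_per_region([("us-east", "c1", 38), ("us-east", "c2", 42), ("eu-west", "c3", 51)])
# -> {'us-east': 40.0, 'eu-west': 51}
```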


Algolia probes


Downed Machines

Downed machines are detected within 30 seconds by a custom Ruby application. Once a machine is detected to be down, we push a DNS change to take it out of the cluster. The upper bound of propagation for that change is 2 minutes (the DNS TTL). During this window, API clients fall back on their built-in retry strategy and connect to the healthy machines in the cluster, so there is no customer impact.
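A minimal sketch of what that client-side failover looks like conceptually - the hostnames and the HTTP layer are placeholders, not the actual Algolia API clients:

```python
# Minimal sketch: try each host of the cluster in turn and return the first
# successful response, so a downed machine is transparently skipped.
import urllib.error
import urllib.request

HOSTS = ["host-1.example.net", "host-2.example.net", "host-3.example.net"]  # placeholders

def search_with_retry(path, timeout=2):
    last_error = None
    for host in HOSTS:
        try:
            with urllib.request.urlopen(f"https://{host}{path}", timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # host is down or unreachable; try the next one
    raise RuntimeError(f"all hosts failed, last error: {last_error}")
```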

Debugging Slow Queries

When a query takes abnormally long - more than 1 second - we dump everything about it to a file. We keep everything we need to rerun it, including the application ID, index name and all query parameters. High-level profiling information is also stored - with it, we can figure out where time is spent in the heaviest 10% of query processing. The getrusage syscall reports resource utilization for the calling process and its children.

For the kernel, we record the number of major page faults (ru_majflt), the number of block input operations (ru_inblock), the number of context switches (ru_nvcsw and ru_nivcsw), the elapsed wall-clock time (using gettimeofday, so that we don’t skip time spent blocked on I/O such as a major page fault, since we’re using memory-mapped files) and a variety of other statistics that help us determine the root cause.
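Our engine calls getrusage directly from C++, but a minimal Python sketch (using the resource module, which wraps the same syscall) shows the kind of per-query accounting involved; the wrapper and the field selection are illustrative:

```python
# Minimal sketch: snapshot rusage counters around a query and report the
# deltas, plus wall-clock time so blocking I/O is counted too.
import resource
import time

def profile_query(run_query):
    before = resource.getrusage(resource.RUSAGE_SELF)
    wall_start = time.time()  # wall clock, so time blocked on a major page fault is included
    result = run_query()
    wall_ms = (time.time() - wall_start) * 1000
    after = resource.getrusage(resource.RUSAGE_SELF)
    stats = {
        "major_page_faults": after.ru_majflt - before.ru_majflt,
        "block_inputs": after.ru_inblock - before.ru_inblock,
        "voluntary_ctx_switches": after.ru_nvcsw - before.ru_nvcsw,
        "involuntary_ctx_switches": after.ru_nivcsw - before.ru_nivcsw,
        "wall_clock_ms": wall_ms,
    }
    return result, stats
```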

With data in hand, the investigation proceeds in this order:

  1. The hardware
  2. The software
  3. Operating system and production environment

Hardware

The easiest problem to detect is a hardware issue. We see burned-out SSDs, broken memory modules and overheated CPUs. We automate the reporting of the most common failures, like failing SSDs, by alerting on S.M.A.R.T. data. For infrequent errors, we might need to run a suite of specific tools to narrow down the root cause, like mbw for uncovering memory bandwidth issues. And of course, there is always syslog, which logs most hardware failures.
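The S.M.A.R.T. alerting can be as simple as shelling out to smartctl and checking the overall health assessment; a minimal sketch (the device path and what happens on failure are placeholders):

```python
# Minimal sketch: ask smartctl for the drive's overall health and flag
# anything that is not reported as PASSED.
import subprocess

def ssd_health_ok(device="/dev/sda"):
    out = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True, check=False,
    ).stdout
    return "PASSED" in out  # smartctl prints the overall-health self-assessment result
```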

Individual machine failures will not have a customer impact because each cluster has 3 machines. Where it’s possible in a given geographical region, each machine is located in a different datacenter and attached to a different network provider. This provides further insulation from network or datacenter loss.

Software

We have some close-to-zero-cost profiling information obtained from the getrusage syscall. Sometimes that’s enough to diagnose an issue with the engine code. If not, we turn to a real profiler. We can’t run a profiler in production for performance reasons, but we can do it after the fact.

We attach a profiler to an external binary that contains exactly the same code as the module running inside nginx. Using google-perftools, a very accurate stack-sampling profiler, we can simulate the exact conditions of the production machine and see where the time went.
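With gperftools, that usually means running the standalone binary with CPUPROFILE set and reading the samples back with pprof; a minimal sketch (the binary name and the --replay flag are placeholders, while the environment variable and pprof usage are standard gperftools):

```python
# Minimal sketch: replay a dumped slow query against a standalone engine
# build linked with libprofiler, then summarize the hottest call stacks.
import os
import subprocess

def profile_replay(engine_binary, query_dump, profile_out="/tmp/slow_query.prof"):
    env = dict(os.environ, CPUPROFILE=profile_out)  # tells libprofiler where to write samples
    subprocess.run([engine_binary, "--replay", query_dump], env=env, check=True)
    report = subprocess.run(
        ["pprof", "--text", engine_binary, profile_out],  # gperftools' pprof, text summary
        capture_output=True, text=True, check=True,
    ).stdout
    return report
```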

OS / Environment

If we can rule out hardware and software failure, the problem might have been with the operating environment at that point in time. That means analyzing system-wide data in the hope of discovering an anomaly.

Once, we discovered that defragmentation of transparent huge pages in the kernel could block our process for several hundred milliseconds. This defragmentation isn’t necessary for us because, like nginx, we keep large pre-allocated memory pools. Now we make sure it never happens, to the benefit of more consistent latency for all of our customers.
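A minimal sketch of that idea on Linux, using the standard transparent huge page sysfs knob (a simplification, not necessarily the exact production tuning):

```python
# Minimal sketch: disable transparent huge page defragmentation so the kernel
# never stalls the process to compact memory. Requires root.
THP_DEFRAG = "/sys/kernel/mm/transparent_hugepage/defrag"

def disable_thp_defrag():
    with open(THP_DEFRAG, "w") as f:
        f.write("never")

def thp_defrag_setting():
    with open(THP_DEFRAG) as f:
        return f.read().strip()  # the active value is shown in brackets, e.g. "... [never]"
```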

Deployment

Every Algolia application runs on a cluster of 3 machines for redundancy and increased throughput. Each indexing operation is replicated across the machines using a durable queue.

Clusters can be mirrored to other global regions across Algolia’s Distributed Search Network (DSN). Global coverage is critical for delivering low latency to users coming from different continents. You can think of DSN like a CDN without caching - every query runs against a live, up-to-date copy of the index.

Early Detection

When we release a new version of the code that powers the API, we do it in an incremental, cluster-aware way so we can roll back immediately if something goes wrong.

Automated by a set of custom deployment scripts, the order of the rolling deploy looks like this:

  • Testing machines
  • Staging machines
  • ⅓ of production machines
  • Another ⅓ of production machines
  • The final ⅓ of production machines

First, we test the new code with unit tests and functional tests on a host that has an exact production configuration. During the API deployment process we use a custom set of scripts to run the tests, but in other areas of our stack we’re using Travis CI.

One thing we guard against is a network issue that produces a split-brain partition during a rolling deployment. Our deployment strategy considers every new version as unstable until it has consensus from every server, and it will continue to retry the deploy until the network partition heals.

Before deployment begins, another process has encrypted our binaries and uploaded them to an S3 bucket. The S3 bucket sits behind CloudFlare to make downloading the binaries fast from anywhere.

We use a custom shell script to do deployments. The script launches the new binaries and then checks to make sure that the new process is running. If it’s not, the script assumes that something has gone wrong and automatically rolls back to the previous version. Even if the previous version also can’t come up, we still won’t have a customer impact while we troubleshoot because the other machines in the cluster can still service requests.
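A minimal sketch of that launch-check-rollback loop - the real script is shell, and the service name, paths and health endpoint below are placeholders:

```python
# Minimal sketch: activate a build, wait for it to report healthy, and fall
# back to the previous build if it never comes up.
import subprocess
import time
import urllib.request

def healthy(url="http://localhost/health", attempts=10):
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(1)
    return False

def activate(binary_path):
    # Point the service at the given build and restart it (placeholder commands).
    subprocess.run(["ln", "-sfn", binary_path, "/usr/local/bin/search-engine"], check=True)
    subprocess.run(["systemctl", "restart", "search-engine"], check=True)

def deploy(new_binary, previous_binary):
    activate(new_binary)
    if healthy():
        return True
    # The new process never came up cleanly: roll back to the last known-good build.
    activate(previous_binary)
    return False
```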

Scaling

For a search engine, there are two basic dimensions of scaling:

  • Search capacity - how many searches can be performed?
  • Storage capacity - how many records can the index hold?

To increase your search capacity with Algolia, you can replicate your data to additional clusters using the point-and-click DSN feature. Once a new DSN cluster is provisioned and brought up-to-date with data, it will automatically begin to process queries.

Scaling storage capacity is a bit more complicated.

Multiple Clusters

Today, Algolia customers who cannot fit on one cluster need to provision a separate cluster and add logic at the application layer to balance between them. This is often needed by SaaS companies whose customers grow at different rates - one customer can end up 10x or 100x the size of the others and has to be moved somewhere it can fit.

Soon we’ll be releasing a feature that moves this complexity behind the API. Algolia will automatically balance data across a customer’s available clusters based on a few key pieces of information. The way it works is similar to sharding, but without the limitation of shards being pinned to a specific node: shards can be moved between clusters dynamically. This avoids a very serious problem encountered by many search engines - if the original shard-key guess was wrong, the entire cluster has to be rebuilt down the road.
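A minimal sketch of that routing idea - shards map to clusters through a mutable table instead of being pinned to a node, so a shard can move without re-sharding everything (the shard count, hashing and names are illustrative assumptions):

```python
# Minimal sketch: a fixed number of logical shards, routed to clusters via a
# table the balancer can rewrite at any time.
import hashlib

NUM_SHARDS = 64  # fixed, independent of how many clusters exist

def shard_for(customer_id: str) -> int:
    digest = hashlib.md5(customer_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Mutable shard -> cluster assignment, maintained by the balancer.
shard_to_cluster = {shard: f"cluster-{shard % 3 + 1}" for shard in range(NUM_SHARDS)}

def cluster_for(customer_id: str) -> str:
    return shard_to_cluster[shard_for(customer_id)]

def move_shard(shard: int, target_cluster: str) -> None:
    # Copy the shard's data to the target cluster first, then flip the route.
    shard_to_cluster[shard] = target_cluster
```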

Collaboration

Our humans and our bots congregate on Slack. Last year we had some growing pains, but now we have a prefix-based naming convention that works pretty well. Our channels are named #team-engineering, #help-engineering, #notif-github, and so on. The #team- channels are for members of a team, #help- channels are for getting help from a team, and #notif- channels are for collecting automatic notifications.


Algolia Zoom Room


It would be hard to count the number of Zoom meetings we have on a given day. Our two main offices are in Paris and San Francisco, making 7am-10am PST the busiest time of day for video calls. We now have dedicated "Zoom Rooms" with iPads, high-resolution cameras and big TVs that make the experience really smooth. With new offices in New York and Atlanta, Zoom will become an even more important part of our collaboration stack which also includes Github, Trello and Asana.

Team

When you're an API, performance and scalability are customer-facing features. The work that our engineers do directly affects the 15,000+ developers that rely on our API. Being developers ourselves, we’re very passionate about open source and staying active with our community.


Algolia values


We’re hiring! Come help us make building search a rewarding experience. Algolia teammates come from a diverse range of backgrounds and 15 different countries. Our values are Care, Humility, Trust, Candor and Grit. Employees are encouraged to travel to different offices - Paris, San Francisco, or now Atlanta - at least once a year, to build strong personal connections inside of the company.

See our open positions on StackShare.

Questions about our stack? We love to talk tech. Comment below or ask us on our Discourse forum.

Thanks to Julien Lemoine, Adam Surak, Rémy-Christophe Schermesser, Jason Harris and Raphael Terrier for their much-appreciated help on this post.
