
Alternatives to Zookeeper

Consul, etcd, Yarn, Eureka, and Ambari are the most popular alternatives and competitors to Zookeeper.

What is Zookeeper and what are its top alternatives?

Zookeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications.
Zookeeper is a tool in the Open Source Service Discovery category of a tech stack.
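
To make the description above concrete, here is a minimal sketch of how an application might use Zookeeper through the kazoo Python client. The connection string, znode paths, and values are illustrative assumptions, not part of any particular stack.

```python
# Minimal sketch of Zookeeper-style configuration and service registration
# using the kazoo client. Host, paths, and values are assumptions.
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")  # assumed local Zookeeper ensemble
zk.start()

# Centralized configuration: store a value under a well-known znode.
zk.ensure_path("/config/app")
zk.set("/config/app", b"feature_x=on")

# Group membership / naming: ephemeral nodes disappear when the session dies,
# which is the building block for service discovery and leader election.
zk.ensure_path("/services/web")
zk.create("/services/web/instance-", b"10.0.0.5:8080",
          ephemeral=True, sequence=True)

# React to changes with a watch on the children of a path.
@zk.ChildrenWatch("/services/web")
def on_members_changed(children):
    print("live web instances:", children)

data, stat = zk.get("/config/app")
print("config:", data.decode(), "version:", stat.version)

zk.stop()
```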

Top Alternatives to Zookeeper

  • Consul
    Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and extremely scalable. ...

  • etcd
    etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines. It’s open-source and available on GitHub. etcd gracefully handles master elections during network partitions and will tolerate machine failure, including the master. ...

  • Yarn
    Yarn caches every package it downloads so it never needs to download it again. It also parallelizes operations to maximize resource utilization, so install times are faster than ever. ...

  • Eureka
    Eureka is a REST (Representational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. ...

  • Ambari
    This project is aimed at making Hadoop management simpler by developing software for provisioning, managing, and monitoring Apache Hadoop clusters. It provides an intuitive, easy-to-use Hadoop management web UI backed by its RESTful APIs. ...

  • Kafka
    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • Redis
    Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. ...

  • Kubernetes
    Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the users' declared intentions. ...

Zookeeper alternatives & related posts

Consul

A tool for service discovery, monitoring and configuration
PROS OF CONSUL
  • Great service discovery infrastructure (60)
  • Health checking (35)
  • Distributed key-value store (29)
  • Monitoring (26)
  • High availability (23)
  • Web UI (12)
  • Token-based ACLs (10)
  • Gossip clustering (6)
  • DNS server (5)
  • Not Java (3)
  • Docker integration (1)
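To make the service discovery and key-value features concrete, here is a small sketch against a local Consul agent's HTTP API using Python's requests library. The service name, port, health-check URL, and key are made-up examples.

```python
# Sketch: register a service with a health check and use the KV store,
# via a local Consul agent's HTTP API. Names, ports, and keys are examples.
import base64
import requests

CONSUL = "http://127.0.0.1:8500"  # assumed local agent

# Register a service with an HTTP health check.
requests.put(f"{CONSUL}/v1/agent/service/register", json={
    "Name": "web",
    "Port": 8080,
    "Check": {"HTTP": "http://127.0.0.1:8080/health", "Interval": "10s"},
})

# Discover healthy instances of the service.
healthy = requests.get(f"{CONSUL}/v1/health/service/web",
                       params={"passing": "true"}).json()
for entry in healthy:
    svc = entry["Service"]
    print("healthy instance:", svc["Address"], svc["Port"])

# Distributed key-value store: write and read a config value.
requests.put(f"{CONSUL}/v1/kv/config/feature_x", data="on")
resp = requests.get(f"{CONSUL}/v1/kv/config/feature_x").json()
print("config value:", base64.b64decode(resp[0]["Value"]).decode())
```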

    related Consul posts

    John Kodumal

    As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

    We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.


    Since the beginning, Cal Henderson has been the CTO of Slack. Earlier this year, he commented on a Quora question summarizing their current stack.

    Apps
    • Web: a mix of JavaScript/ES6 and React.
    • Desktop: Electron, used to ship the web app as a desktop application.
    • Android: a mix of Java and Kotlin.
    • iOS: written in a mix of Objective C and Swift.
    Backend
    • The core application and the API are written in PHP/Hack and run on HHVM.
    • The data is stored in MySQL using Vitess.
    • Caching is done using Memcached and MCRouter.
    • The search service relies on SolrCloud, with various Java services.
    • The messaging system uses WebSockets with many services in Java and Go.
    • Load balancing is done using HAproxy with Consul for configuration.
    • Most services talk to each other over gRPC; some use Thrift and JSON-over-HTTP.
    • Voice and video calling service was built in Elixir.
    Data warehouse
    • Built using open source tools including Presto, Spark, Airflow, Hadoop and Kafka.
    Etc
etcd

A distributed consistent key-value store for shared configuration and service discovery
PROS OF ETCD
  • Service discovery (11)
  • Fault-tolerant key-value store (6)
  • Secure (2)
  • Bundled with CoreOS (2)
  • Consul integration (1)
  • Privilege access management (1)
  • Open source (1)
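As a rough illustration of etcd's key-value and lease primitives (the building blocks behind the master elections mentioned above), here is a sketch using the python-etcd3 client. The endpoint and keys are assumptions for illustration.

```python
# Sketch: store shared configuration in etcd and attach a lease (TTL),
# the primitive that underpins leader election and liveness checks.
import etcd3

client = etcd3.client(host="127.0.0.1", port=2379)  # assumed local etcd

# Plain key-value configuration shared across the cluster.
client.put("/config/app/feature_x", "on")
value, meta = client.get("/config/app/feature_x")
print("config:", value.decode(), "mod revision:", meta.mod_revision)

# A key bound to a lease disappears if the holder stops refreshing it,
# so other nodes can notice that a "leader" or instance has gone away.
lease = client.lease(ttl=10)
client.put("/leaders/app", "node-1", lease=lease)

# List everything under a prefix, e.g. all config entries.
for val, md in client.get_prefix("/config/app/"):
    print(md.key.decode(), "=", val.decode())
```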

Yarn

A new package manager for JavaScript
PROS OF YARN
  • Incredibly fast (85)
  • Easy to use (22)
  • Open source (13)
  • Can install any npm package (11)
  • Works where npm fails (8)
  • Workspaces (7)
  • Incomplete to run tasks (3)
  • Fast (2)
CONS OF YARN
  • Facebook (16)
  • Sends data to Facebook (7)
  • Should be installed separately (4)
  • Cannot publish to a registry other than npm (3)

      related Yarn posts

      Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH

      Our whole Node.js backend stack consists of the following tools:

      • Lerna as a tool for multi package and multi repository management
      • npm as package manager
      • NestJS as Node.js framework
      • TypeScript as programming language
      • ExpressJS as web server
      • Swagger UI for visualizing and interacting with the API’s resources
      • Postman as a tool for API development
      • TypeORM as object relational mapping layer
      • JSON Web Token for access token management

The main reasons we chose Node.js over PHP relate to the following factors:

• Made for the web and widely in use: Node.js is a software platform for developing server-side network services. Well-known projects that rely on Node.js include the blogging software Ghost, the project management tool Trello and the operating system WebOS. Node.js requires the JavaScript runtime environment V8, which was specially developed by Google for the popular Chrome browser. This makes for a very resource-efficient architecture, which suits Node.js especially well to running a web server. Ryan Dahl, the developer of Node.js, released the first stable version on May 27, 2009. He developed Node.js out of dissatisfaction with the possibilities that JavaScript offered at the time. The basic functionality of Node.js has been provided in JavaScript since the first version and can be extended with a large number of modules. The current package managers (npm or Yarn) for Node.js know more than 1,000,000 of these modules.
• Fast server-side solutions: Node.js uses the JavaScript event loop to create non-blocking I/O applications that conveniently serve simultaneous events. With the asynchronous processing available as standard in JavaScript/TypeScript, highly scalable server-side solutions can be realized. CPU and RAM are used efficiently, and more simultaneous requests can be processed than with conventional multi-threaded servers.
• A language along the entire stack: Widely used frameworks such as React, AngularJS, or Vue.js (which we prefer) are written in JavaScript/TypeScript. If Node.js is also used on the server side, you get all the advantages of a single scripting language across the entire application. The same language in the backend and frontend simplifies maintenance of the application and coordination within the development team.
• Flexibility: Node.js sets very few strict dependencies, rules and guidelines and thus grants a high degree of flexibility in application development. There are no strict conventions, so the appropriate architecture, design structures, modules and features can be freely selected during development.
      Johnny Bell

So when starting a new project you generally have your go-to tools to get your site up and running locally, and some scripts to build out a production version of your site. Create React App is great for that, but for my projects I feel there is too much bloat in Create React App, and if I use it I'm tied to React. I love React, but if I want to switch to Vue or something else I want that flexibility.

So to get everything up and running I clone my personal Webpack boilerplate. It's still on Webpack 3 and does need some updating, but it gets the job done for now. Given the name of the repo, you may have guessed that yes, I am using Webpack as my bundler. I use Webpack because it is so powerful, and even though it has a steep learning curve, once you get it, it's amazing.

The next thing I do is make sure my machine has Node.js configured and the right version installed, then I run Yarn. I decided to use Yarn because when I was building out this project npm had some shortcomings, such as no .lock file. I could probably move from Yarn to npm, but I don't really see the point.

I use Babel to transpile all of my #ES6 to #ES5 so the browser can read it. I love Babel, and to be honest I haven't looked at any other transpilers because Babel is amazing.

Finally, when developing I have Prettier set up to make sure all my code is clean and uniform across all my JS files, and ESLint to make sure I catch any errors or code that could be optimized.

      I'm really happy with this stack for my local env setup, and I'll probably stick with it for a while.

Eureka

AWS Service registry for resilient mid-tier load balancing and failover.
PROS OF EUREKA
  • Easy setup and integration with Spring Cloud (21)
  • Web UI (9)
  • Monitoring (8)
  • Health checking (8)
  • Circuit breaker (7)
  • Netflix battle-tested components (6)
  • Service discovery (6)
  • Open source (4)
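Eureka is usually consumed through Spring Cloud, but its registry is just a REST API, so here is a rough sketch of querying it from Python. The server URL and application name are assumptions, and the exact response shape can vary between Eureka versions.

```python
# Sketch: query a Eureka server's REST registry for instances of an application.
# URL and app name are assumptions; response layout may differ across versions.
import requests

EUREKA = "http://localhost:8761/eureka"  # assumed Spring Cloud Eureka server

resp = requests.get(f"{EUREKA}/apps/MY-SERVICE",
                    headers={"Accept": "application/json"})
resp.raise_for_status()

app = resp.json().get("application", {})
instances = app.get("instance", [])
if isinstance(instances, dict):  # a single instance may come back as an object
    instances = [instances]

for instance in instances:
    # Each instance advertises the address a client can load-balance against.
    print(instance.get("ipAddr"),
          instance.get("port", {}).get("$"),
          instance.get("status"))
```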

Ambari

A software for provisioning, managing, and monitoring Apache Hadoop clusters
PROS OF AMBARI
  • Ease of use (1)
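Ambari's management functions are exposed over a REST API backing its web UI; the following is a hedged sketch of reading cluster information from Python. The host, credentials, and cluster layout below are placeholders.

```python
# Sketch: read cluster information from Ambari's REST API.
# Host, credentials, and cluster names below are placeholders.
import requests

AMBARI = "http://ambari-host:8080/api/v1"
auth = ("admin", "admin")  # default credentials; change in any real deployment

# List the clusters this Ambari server manages.
clusters = requests.get(f"{AMBARI}/clusters", auth=auth).json()
for item in clusters.get("items", []):
    name = item["Clusters"]["cluster_name"]
    print("cluster:", name)

    # List the hosts that belong to the cluster.
    hosts = requests.get(f"{AMBARI}/clusters/{name}/hosts", auth=auth).json()
    for host in hosts.get("items", []):
        print("  host:", host["Hosts"]["host_name"])
```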

Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system
PROS OF KAFKA
  • High throughput (126)
  • Distributed (119)
  • Scalable (92)
  • High performance (86)
  • Durable (66)
  • Publish-subscribe (38)
  • Simple to use (19)
  • Open source (18)
  • Written in Scala and Java; runs on the JVM (12)
  • Message broker + streaming system (8)
  • Robust (4)
  • KSQL (4)
  • Avro schema integration (4)
  • Supports multiple clients (3)
  • Partitioned, replayable log (2)
  • Flexible (1)
  • Extremely good parallelism constructs (1)
  • Fun (1)
  • Simple publisher / multi-subscriber model (1)
CONS OF KAFKA
  • Non-Java clients are second-class citizens (32)
  • Needs Zookeeper (29)
  • Operational difficulties (9)
  • Terrible packaging (4)
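To show the commit-log / pub-sub model in practice, here is a minimal producer and consumer sketch using the kafka-python client. The broker address, topic name, and consumer group are examples.

```python
# Sketch: publish to and consume from a Kafka topic with kafka-python.
# Broker address, topic, and group are examples.
from kafka import KafkaProducer, KafkaConsumer

# Producer: append messages to the partitioned, replicated log for a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", key=b"order-42", value=b'{"item": "book", "qty": 1}')
producer.flush()

# Consumer: read the log from the beginning as part of a consumer group.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="billing",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when idle, for this sketch
)
for message in consumer:
    print(message.partition, message.offset, message.key, message.value)
```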

          related Kafka posts

          Eric Colson
Chief Algorithms Officer at Stitch Fix

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

          Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

          For more info:

          #DataScience #DataStack #Data

Redis

Open source (BSD licensed), in-memory data structure store
PROS OF REDIS
  • Performance (884)
  • Super fast (541)
  • Ease of use (512)
  • In-memory cache (443)
  • Advanced key-value cache (323)
  • Open source (193)
  • Easy to deploy (182)
  • Stable (164)
  • Free (155)
  • Fast (121)
  • High performance (42)
  • High availability (40)
  • Data structures (34)
  • Very scalable (32)
  • Replication (24)
  • Pub/Sub (22)
  • Great community (22)
  • "NoSQL" key-value data store (19)
  • Hashes (15)
  • Sets (13)
  • Sorted sets (11)
  • Lists (10)
  • BSD licensed (9)
  • NoSQL (9)
  • Integrates easily with Sidekiq for Rails background jobs (8)
  • Async replication (8)
  • Bitmaps (8)
  • Open source (7)
  • Keys with a limited time-to-live (7)
  • Lua scripting (6)
  • Strings (6)
  • Awesomeness for free (5)
  • HyperLogLogs (5)
  • Written in ANSI C (4)
  • LRU eviction of keys (4)
  • Networked (4)
  • Outstanding performance (4)
  • Runs server-side Lua (4)
  • Transactions (4)
  • Feature rich (4)
  • Performance & ease of use (3)
  • Data structure server (3)
  • Object [key/value] size up to 500 MB each (2)
  • Simple (2)
  • Scalable (2)
  • Temporarily kept on disk (2)
  • Doesn't save data if no subscribers are found (2)
  • Automatic failover (2)
  • Easy to use (2)
  • Existing Laravel integration (2)
  • Channels concept (2)
CONS OF REDIS
  • Cannot query objects directly (15)
  • No secondary indexes for non-numeric data types (3)
  • No WAL (1)
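A few of the data structures listed above in one place, as a short sketch with the redis-py client; the host and key names are examples.

```python
# Sketch: a handful of Redis data structures via redis-py.
# Host and key names are examples.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Strings with a TTL: a classic cache entry.
r.set("session:abc123", "user-42", ex=3600)

# Hashes: store an object's fields under one key.
r.hset("user:42", mapping={"name": "Ada", "plan": "pro"})
print(r.hgetall("user:42"))

# Sorted sets with range queries: e.g. a leaderboard.
r.zadd("leaderboard", {"ada": 3100, "bob": 2800})
print(r.zrevrange("leaderboard", 0, 9, withscores=True))

# Pub/Sub: messages go to current subscribers only, nothing is stored.
r.publish("events", "deploy finished")
```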

          related Redis posts

          Robert Zuber

          We use MongoDB as our primary #datastore. Mongo's approach to replica sets enables some fantastic patterns for operations like maintenance, backups, and #ETL.

          As we pull #microservices from our #monolith, we are taking the opportunity to build them with their own datastores using PostgreSQL. We also use Redis to cache data we’d never store permanently, and to rate-limit our requests to partners’ APIs (like GitHub).

          When we’re dealing with large blobs of immutable data (logs, artifacts, and test results), we store them in Amazon S3. We handle any side-effects of S3’s eventual consistency model within our own code. This ensures that we deal with user requests correctly while writes are in process.
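The rate-limiting use mentioned above is a common Redis pattern; a minimal fixed-window sketch with redis-py follows. The key format, limit, and window size are assumptions for illustration, not CircleCI's actual implementation.

```python
# Sketch: fixed-window rate limiting with Redis INCR + EXPIRE.
# Key format, limit, and window are assumptions for illustration.
import time
import redis

r = redis.Redis()

def allow_request(partner: str, limit: int = 100, window_s: int = 60) -> bool:
    """Return True if this partner is still under `limit` calls per window."""
    window = int(time.time()) // window_s
    key = f"ratelimit:{partner}:{window}"
    count = r.incr(key)          # atomically count the call
    if count == 1:
        r.expire(key, window_s)  # let the counter clean itself up
    return count <= limit

if allow_request("github"):
    pass  # call the partner API
else:
    pass  # back off or queue the request
```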


I'm working as one of the engineering leads in RunaHR. As our platform is a SaaS, we thought it'd be good to have an API (we chose Ruby and Rails for this) and a SPA (built with React and Redux) connected. We started the SPA with Create React App since it's pretty easy to get started with.

          We use Jest as the testing framework and react-testing-library to test React components. In Rails we make tests using RSpec.

Our main database is PostgreSQL, but we also use MongoDB to store some types of data. We started using Redis for caching and other time-sensitive operations.

We have a couple of extra projects: one is an employee app built with React Native, and the other is an internal back-office dashboard built with Next.js for the client and Python on the backend.

Since we have different frontend apps, we have found it useful to have Bit to document visual components and utils in JavaScript.

Kubernetes

Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops
PROS OF KUBERNETES
  • Leading Docker container management solution (164)
  • Simple and powerful (128)
  • Open source (106)
  • Backed by Google (76)
  • The right abstractions (58)
  • Scale services (25)
  • Replication controller (20)
  • Permission management (11)
  • Cheap (8)
  • Supports autoscaling (8)
  • Simple (8)
  • No cloud platform lock-in (5)
  • Reliable (5)
  • Self-healing (5)
  • Quick cloud setup (4)
  • Promotes modern/good infrastructure practice (4)
  • Scalable (4)
  • Open, powerful, stable (4)
  • Runs on Azure (3)
  • Captain of Container Ship (3)
  • Cloud agnostic (3)
  • Custom and extensibility (3)
  • Backed by Red Hat (3)
  • A self-healing environment with rich metadata (3)
  • GKE (2)
  • Everything of CaaS (2)
  • Sfg (2)
  • Expandable (2)
  • Golang (2)
  • Easy setup (2)
CONS OF KUBERNETES
  • Poor workflow for development (15)
  • Steep learning curve (15)
  • Orchestrates only infrastructure (8)
  • High resource requirements for on-prem clusters (4)
  • Too heavy for simple systems (2)
  • Additional vendor lock-in (Docker) (1)
  • More moving parts to secure (1)
  • Additional technology overhead (1)
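As a small taste of the declarative model described above (the cluster reconciling observed state toward the user's declared intentions), here is a sketch using the official Kubernetes Python client; the kubeconfig and namespace are assumptions.

```python
# Sketch: compare declared vs. observed state of Deployments with the
# official Kubernetes Python client. Assumes a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # in-cluster apps would use load_incluster_config()
apps = client.AppsV1Api()

for dep in apps.list_namespaced_deployment(namespace="default").items:
    wanted = dep.spec.replicas                 # what the user declared
    ready = dep.status.ready_replicas or 0     # what the cluster currently runs
    print(f"{dep.metadata.name}: {ready}/{wanted} replicas ready")
```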

          related Kubernetes posts

          Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber

How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:

          Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

          Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

          https://eng.uber.com/distributed-tracing/

(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark

          Yshay Yaacobi

Our first experience with .NET Core was when we developed our OSS feature management platform, Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics, and can fit in a multi-container architecture. We decided to implement our rule engine processor in F#; our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us too; it handled all our polyglot services, and the .NET Core integration offered a great cross-platform developer experience (to be fair, F# was a bit trickier). In fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.

After our positive experience running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# gained back some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...
