
Alternatives to Kafka Streams

Kafka, Apache Spark, Apache Flink, Apache Beam, and Apache Storm are the most popular alternatives and competitors to Kafka Streams.

What is Kafka Streams and what are its top alternatives?

Kafka Streams is a client library for building applications and microservices whose input and output data are stored in Kafka clusters. It combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster technology.
Kafka Streams is a tool in the Stream Processing category of a tech stack.
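
To make that concrete, here is a minimal Kafka Streams word-count sketch in Java. It is only an illustration: the application id, broker address, and topic names ("text-lines", "word-counts") are assumptions, not anything prescribed by Kafka Streams.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Arrays;
import java.util.Properties;

public class WordCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");   // hypothetical application id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("text-lines");         // hypothetical input topic
        KTable<String, Long> counts = lines
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
            .groupBy((key, word) -> word)
            .count();                                                         // state lives in a local store backed by a Kafka changelog topic
        counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));  // hypothetical output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The count() state is maintained by the library itself and replicated through Kafka, which is the main thing that distinguishes Kafka Streams from a plain consumer.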

Top Alternatives to Kafka Streams

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Apache Flink

    Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala. ...

  • Apache Beam

    It implements batch and streaming data processing jobs that run on any execution engine. It executes pipelines on multiple execution environments. ...

  • Apache Storm

    Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. ...

  • KSQL

    KSQL is an open source streaming SQL engine for Apache Kafka. It provides a simple and completely interactive SQL interface for stream processing on Kafka; no need to write code in a programming language such as Java or Python. KSQL is open-source (Apache 2.0 licensed), distributed, scalable, reliable, and real-time. ...

  • Samza

    It allows you to build stateful applications that process data in real-time from multiple sources including Apache Kafka. ...

  • Apache NiFi

    An easy to use, powerful, and reliable system to process and distribute data. It supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. ...

Kafka Streams alternatives & related posts

Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system

PROS OF KAFKA (upvote counts in parentheses)
  • High-throughput (125)
  • Distributed (119)
  • Scalable (89)
  • High-performance (83)
  • Durable (65)
  • Publish-subscribe (37)
  • Simple to use (19)
  • Open source (17)
  • Written in Scala and Java; runs on the JVM (11)
  • Message broker + streaming system (8)
  • Avro schema integration (4)
  • Robust (4)
  • KSQL (4)
  • Supports multiple clients (2)
  • Partitioned, replayable log (2)
  • Flexible (1)
  • Extremely good parallelism constructs (1)
  • Simple publisher / multi-subscriber model (1)
  • Fun (1)
CONS OF KAFKA
  • Non-Java clients are second-class citizens (29)
  • Needs ZooKeeper (27)
  • Operational difficulties (7)
  • Terrible packaging (2)
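
As context for the pros and cons above, here is a minimal sketch of what publishing to Kafka looks like from Java; the broker address, topic, key, and value are hypothetical.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Append one event to the "events" topic; any number of consumer groups can read
            // (and re-read) it independently, which is the pub-sub / replayable-log model above.
            producer.send(new ProducerRecord<>("events", "user-42", "signed_up"));  // hypothetical topic, key, value
        }
    }
}
```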

related Kafka posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2.3M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn) by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

John Kodumal

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.

Apache Spark

Fast and general engine for large-scale data processing

PROS OF APACHE SPARK
  • Open source (59)
  • Fast and flexible (48)
  • One platform for every big data problem (8)
  • Great for distributed SQL-like applications (7)
  • Easy to install and use (6)
  • Works well for most data science use cases (3)
  • Interactive query (2)
  • In-memory computation (2)
  • Machine learning library, streaming in real time (2)
CONS OF APACHE SPARK
  • Speed (3)
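
To show how Spark covers the same streaming ground as Kafka Streams, here is a minimal Structured Streaming sketch in Java that reads a Kafka topic and echoes it to the console. It assumes the spark-sql-kafka connector is on the classpath; the broker address and topic name are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class SparkKafkaSketch {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("spark-kafka-sketch")
            .getOrCreate();

        // Treat a Kafka topic as an unbounded DataFrame.
        Dataset<Row> events = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")  // assumed broker address
            .option("subscribe", "events")                        // hypothetical topic
            .load()
            .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        // Print each micro-batch to the console; a real job would write to a sink such as Kafka or Parquet.
        StreamingQuery query = events.writeStream()
            .format("console")
            .outputMode("append")
            .start();
        query.awaitTermination();
    }
}
```

Unlike Kafka Streams, this runs on a Spark cluster (YARN, Kubernetes, or standalone) rather than inside your own application process.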

related Apache Spark posts

Eric Colson's Stitch Fix post above (under related Kafka posts) also covers Apache Spark.
Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 1.1M views

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse to any sink, leveraging Apache Spark. The name Marmaray comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)

Apache Flink

Fast and reliable large-scale data processing engine

PROS OF APACHE FLINK
  • Unified batch and stream processing (15)
  • Easy-to-use streaming APIs (8)
  • Out-of-the-box connectors to Kinesis, S3, HDFS (8)
  • Open source (3)
  • Low latency (2)
CONS OF APACHE FLINK
  No cons listed yet.
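
A minimal Flink DataStream sketch in Java, counting words from an in-memory source; in a real job the source would typically be a Kafka or Kinesis connector. Class and job names are illustrative.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class FlinkWordCountSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("to be", "or not to be")                 // stand-in for a Kafka/Kinesis source
            .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
                for (String word : line.split("\\s+")) {
                    out.collect(Tuple2.of(word, 1));
                }
            })
            .returns(Types.TUPLE(Types.STRING, Types.INT))        // type hint needed because of lambda type erasure
            .keyBy(value -> value.f0)                             // group by word
            .sum(1)                                               // running count per word
            .print();

        env.execute("flink-wordcount-sketch");
    }
}
```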

related Apache Flink posts

Surabhi Bhawsar
Technical Architect at Pepcus · 7 upvotes · 588.3K views
Shared insights on Kafka and Apache Flink

I need to build an alert and notification framework with the use of a scheduled program. We will analyze the events from the database table, filter events that fall within a one-day timespan, and send these event messages over email. Currently, we are using Kafka pub/sub for messaging. The customer wants us to move to Apache Flink, and I am trying to understand how Apache Flink could be a better fit for us.


Apache Beam

A unified programming model

PROS OF APACHE BEAM
  • Open source (5)
  • Cross-platform (5)
  • Portable (2)
  • Unified batch and stream processing (2)
CONS OF APACHE BEAM
  No cons listed yet.
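
A minimal Beam pipeline sketch in Java illustrating the portable model: the same pipeline runs on the local direct runner by default and on Flink, Spark, or Dataflow by passing a different --runner option. The element values are illustrative.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class BeamSketch {
    public static void main(String[] args) {
        // Pass e.g. --runner=FlinkRunner to execute the same pipeline on a Flink cluster.
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        Pipeline pipeline = Pipeline.create(options);

        pipeline
            .apply(Create.of("kafka", "flink", "beam"))                           // stand-in for a real source such as KafkaIO
            .apply(MapElements.into(TypeDescriptors.strings())
                              .via((String word) -> word.toUpperCase()))
            .apply(MapElements.into(TypeDescriptors.voids())
                              .via((String word) -> { System.out.println(word); return null; }));  // crude sink for the sketch

        pipeline.run().waitUntilFinish();
    }
}
```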

related Apache Beam posts

I have to build a data processing application with an Apache Beam stack and an Apache Flink runner on an Amazon EMR cluster. I saw some instability with the process, and the EMR clusters keep going down. Here, the Apache Beam application gets its input from Kafka and sends the accumulated data streams to another Kafka topic. Any advice on how to make the process more stable?

Apache Storm

Distributed and fault-tolerant realtime computation

PROS OF APACHE STORM
  • Flexible (10)
  • Easy setup (6)
  • Clojure (3)
  • Event processing (3)
  • Real time (2)
CONS OF APACHE STORM
  No cons listed yet.
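
A minimal Storm topology sketch in Java, wiring Storm's built-in TestWordSpout to a logging bolt and running it on an in-process LocalCluster. It assumes Storm 2.x (where LocalCluster is AutoCloseable); names are illustrative.

```java
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

public class StormSketch {

    // A bolt that simply prints every word it receives.
    public static class PrintBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            System.out.println(input.getString(0));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // This bolt emits nothing downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new TestWordSpout());                  // built-in test spout that emits random words
        builder.setBolt("print", new PrintBolt()).shuffleGrouping("words");

        try (LocalCluster cluster = new LocalCluster()) {                // in-process cluster for local testing
            cluster.submitTopology("storm-sketch", new Config(), builder.createTopology());
            Thread.sleep(10_000);                                        // let the topology run briefly
        }
    }
}
```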

related Apache Storm posts

Marc Bollinger
Infra & Data Eng Manager at Thumbtack · 5 upvotes · 514.1K views

Lumosity is home to the world's largest cognitive training database, a responsibility we take seriously. For most of the company's history, our analysis of user behavior and training data has been powered by an event stream: first a simple Node.js pub/sub app, then a heavyweight Ruby app with stronger durability. Both supported decent throughput and latency, but they lacked some major features available in existing open-source alternatives: replaying existing messages (also lacking in most message queue-based solutions), scaling out many different readers for the same stream, the ability to leverage existing solutions for reading and writing, and, possibly most importantly, the ability to hire someone externally who already had expertise.

We ultimately migrated to Kafka in early- to mid-2016, citing both industry trends among companies we'd talked to with similar durability and throughput needs and the extremely strong documentation and community. We pored over Kyle Kingsbury's Jepsen post (https://aphyr.com/posts/293-jepsen-Kafka), as well as Jay Kreps' follow-up (http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen), talked at length with Confluent folks and community members, and still wound up running parallel systems for quite a long time, but ultimately, we've been very, very happy. Understanding the internals and proper levers takes some commitment, but it's taken very little maintenance once configured. Since then, the Confluent Platform community has grown and grown; we've gone from doing most development using custom Scala consumers and producers to being 60/40 Kafka Streams/Connect.

We originally looked into Storm / Heron, and we'd moved on from Redis pub/sub. Heron looks great, but we already had a programming model across services that was more akin to consuming messages than to the required topology of bolts, etc. Heron also had just come out while we were starting to migrate things, and the community momentum and direction of Kafka felt more substantial than that of the older Storm. If we were to start the process over again today, we might check out Pulsar, although the ecosystem is much younger.

To find out more, read our 2017 engineering blog post about the migration!

KSQL

Open source streaming SQL for Apache Kafka

PROS OF KSQL
  • Stream processing on Kafka (3)
  • SQL syntax with windowing functions over streams (2)
  • Easy transition for SQL devs (0)
CONS OF KSQL
  No cons listed yet.
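
To make the Kafka Streams vs. KSQL trade-off concrete, here is roughly the same filter expressed both ways: a small Kafka Streams application in Java, with the equivalent KSQL statement shown as a comment. Topic and stream names are hypothetical.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class PageviewsFilterSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "pageviews-filter");   // hypothetical application id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Kafka Streams: the filter is application code that you build, deploy, and scale yourself.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> pageviews = builder.stream("pageviews");      // hypothetical input topic
        pageviews.filter((key, value) -> value.contains("\"country\":\"US\""))
                 .to("pageviews-us");                                         // hypothetical output topic

        // Roughly the same logic in KSQL, run as SQL inside a KSQL/ksqlDB server instead of in your own app:
        //   CREATE STREAM pageviews_us AS
        //     SELECT * FROM pageviews WHERE country = 'US';

        new KafkaStreams(builder.build(), props).start();
    }
}
```

A common rule of thumb: reach for KSQL when the logic fits SQL and you do not want to operate your own JVM application, and for Kafka Streams when you need arbitrary Java/Scala code, custom state, or tight integration with an existing service.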

related KSQL posts

I have recently started using Confluent/Kafka Cloud. We want to do some stream processing. As I was going through Kafka, I came across Kafka Streams and KSQL. Both seem to be a good fit for stream processing, but I could not figure out which one should be used and whether one has any advantage over the other. We will be using a Confluent/Kafka managed cloud instance. In the near future, our producers and consumers will run on premises, and we will be interacting with Confluent Cloud.

Also, Confluent Cloud Kafka has a primitive interface; is there any better UI for managing a Kafka cloud cluster?

Samza

A distributed stream processing framework

PROS OF SAMZA
  No pros listed yet.
CONS OF SAMZA
  No cons listed yet.

related Samza posts

Apache NiFi

A reliable system to process and distribute data

PROS OF APACHE NIFI
  • Visual data flows using directed acyclic graphs (DAGs) (15)
  • Free and open source (8)
  • Simple to use (7)
  • Reactive with back-pressure (5)
  • Scales horizontally as well as vertically (5)
  • Fast prototyping (4)
  • Bi-directional channels (3)
  • Data provenance (2)
  • Built-in graphical user interface (2)
  • End-to-end security between all nodes (2)
  • Can handle messages up to gigabytes in size (2)
  • HBase support (1)
  • Kudu support (1)
  • Hive support (1)
  • Slack integration (1)
  • Support for custom processors in Java (1) (see the sketch after this list)
  • Lots of articles (1)
  • Lots of documentation (1)
CONS OF APACHE NIFI
  • HA support is not full-fledged (2)
  • Memory-intensive (2)
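
One of the pros above is support for custom processors in Java. As a rough sketch (not an official NiFi example), here is a minimal processor that upper-cases the content of each FlowFile and routes it to a success relationship:

```java
import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Set;

public class UpperCaseProcessor extends AbstractProcessor {

    public static final Relationship REL_SUCCESS = new Relationship.Builder()
        .name("success")
        .description("Upper-cased FlowFiles")
        .build();

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
            return;  // nothing queued for this processor right now
        }
        // Rewrite the FlowFile content in place: read it fully, upper-case it, write it back.
        flowFile = session.write(flowFile, (in, out) -> {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int read;
            while ((read = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, read);
            }
            String upper = new String(buffer.toByteArray(), StandardCharsets.UTF_8).toUpperCase();
            out.write(upper.getBytes(StandardCharsets.UTF_8));
        });
        session.transfer(flowFile, REL_SUCCESS);
    }
}
```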

related Apache NiFi posts

I am looking for the best tool to orchestrate #ETL workflows in non-Hadoop environments, mainly for regression testing use cases. Would Airflow or Apache NiFi be a good fit for this purpose?

For example, I want to run an Informatica ETL job and then run an SQL task as a dependency, followed by another task from Jira. What tool is best suited to set up such a pipeline?
