
Alternatives to Amazon Kinesis

Kafka, Apache Spark, Amazon SQS, Amazon Kinesis Firehose, and Firehose.io are the most popular alternatives and competitors to Amazon Kinesis.

What is Amazon Kinesis and what are its top alternatives?

Amazon Kinesis can collect and process hundreds of gigabytes of data per second from hundreds of thousands of sources, allowing you to easily write applications that process information in real time, from sources such as website click-streams, marketing and financial information, manufacturing instrumentation and social media, and operational logs and metering data.
Amazon Kinesis is a tool in the Real-time Data Processing category of a tech stack.
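
To give a concrete sense of what writing to a stream looks like, here is a minimal sketch using the boto3 Python SDK; the stream name, region, and event shape are illustrative assumptions rather than anything prescribed by Kinesis itself.

```python
# A minimal sketch of writing click-stream events to a Kinesis stream with boto3.
# The stream name, region, and event shape are illustrative assumptions.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_click_event(user_id: str, page: str) -> None:
    event = {"type": "click", "user_id": user_id, "page": page}
    kinesis.put_record(
        StreamName="clickstream-events",           # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),    # payload bytes
        PartitionKey=user_id,                      # decides which shard gets the record
    )

publish_click_event("user-123", "/pricing")
```

On the consuming side you would typically attach a Lambda trigger or use the Kinesis Client Library rather than polling shards by hand.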

Top Alternatives to Amazon Kinesis

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning. ...

  • Amazon SQS

    Transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available. With SQS, you can offload the administrative burden of operating and scaling a highly available messaging cluster, while paying a low price for only what you use. ...

  • Amazon Kinesis Firehose

    Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture and automatically load streaming data into Amazon S3 and Amazon Redshift, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. ...

  • Firehose.io

    Firehose is both a Rack application and a JavaScript library that makes building real-time web applications possible. ...

  • Apache Storm

    Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. ...

  • Confluent

    It is a data streaming platform based on Apache Kafka: a full-scale streaming platform capable not only of publish-and-subscribe, but also of storing and processing data within the stream. ...

  • Google Cloud Dataflow

    Google Cloud Dataflow is a unified programming model and a managed service for developing and executing a wide range of data processing patterns including ETL, batch computation, and continuous computation. Cloud Dataflow frees you from operational tasks like resource management and performance optimization. ...

Amazon Kinesis alternatives & related posts

Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system
PROS OF KAFKA
  • 126
    High-throughput
  • 119
    Distributed
  • 92
    Scalable
  • 86
    High-Performance
  • 66
    Durable
  • 38
    Publish-Subscribe
  • 19
    Simple-to-use
  • 18
    Open source
  • 12
    Written in Scala and Java; runs on the JVM
  • 8
    Message broker + Streaming system
  • 4
    Robust
  • 4
    KSQL
  • 4
    Avro schema integration
  • 3
    Supports multiple clients
  • 2
    Partitioned, replayable log (see the sketch below)
  • 1
    Flexible
  • 1
    Extremely good parallelism constructs
  • 1
    Fun
  • 1
    Simple publisher / multi-subscriber model
CONS OF KAFKA
  • 32
    Non-Java clients are second-class citizens
  • 29
    Needs Zookeeper
  • 9
    Operational difficulties
  • 4
    Terrible Packaging
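
As a rough illustration of the publish/subscribe model and the partitioned, replayable log called out in the pros above, here is a minimal sketch using the kafka-python client; the broker address, topic name, and consumer group are assumptions, not anything tied to a specific deployment.

```python
# A minimal publish/subscribe sketch using the kafka-python client.
# The broker address, topic, and consumer group are assumptions.
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", key=b"user-123", value=b'{"type": "click"}')
producer.flush()

# The log is partitioned and replayable, so a fresh consumer group
# can re-read the topic from the earliest offset.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="analytics",
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.partition, record.offset, record.value)
```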

related Kafka posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 5.3M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (s3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for adhoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

John Kodumal

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, and we have shifted to Amazon Kinesis instead of Kafka.

Apache Spark

Fast and general engine for large-scale data processing
PROS OF APACHE SPARK
  • 60
    Open-source
  • 48
    Fast and Flexible
  • 8
    One platform for every big data problem
  • 8
    Great for distributed SQL like applications
  • 6
    Easy to install and to use
  • 3
    Works well for most Datascience usecases
  • 2
    Interactive Query
  • 2
    In memory Computation
  • 2
    Machine learning libraries, streaming in real time (see the sketch below)
CONS OF APACHE SPARK
  • 4
    Speed
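
To illustrate the in-memory computation and DataFrame-style workloads mentioned in the pros above, here is a minimal PySpark sketch; the input path and event schema are hypothetical.

```python
# A minimal PySpark sketch: a batch aggregation over JSON event logs.
# The input path and event schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-rollup").getOrCreate()

# Spark can read from HDFS, S3, Cassandra, Hive, etc.; a file path is used here.
events = spark.read.json("s3a://example-bucket/clickstream/2024-01-01/")

# Clicks per page, computed in memory across the cluster.
clicks_per_page = (
    events.filter(F.col("type") == "click")
          .groupBy("page")
          .count()
          .orderBy(F.desc("count"))
)
clicks_per_page.show()
spark.stop()
```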

related Apache Spark posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 5.3M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (s3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for adhoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying to Amazon ECS. This provides our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

For more info:

#DataScience #DataStack #Data

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 2.6M views

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse to any sink leveraging the use of Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)

Amazon SQS

Fully managed message queuing service
PROS OF AMAZON SQS
  • 62
    Easy to use, reliable
  • 40
    Low cost
  • 28
    Simple
  • 14
    Doesn't need to maintain it
  • 8
    It is Serverless
  • 4
    Has a max message size (currently 256K)
  • 3
    Triggers Lambda
  • 3
    Easy to configure with Terraform
  • 3
    Delayed delivery up to 15 mins only
  • 3
    Delayed delivery up to 12 hours
  • 1
    JMS compliant
  • 1
    Support for retry and dead letter queue
  • 1
    D
CONS OF AMAZON SQS
  • 2
    Has a max message size (currently 256K)
  • 2
    Proprietary
  • 2
    Difficult to configure
  • 1
    Delayed messages are capped at 15 minutes

related Amazon SQS posts

Praveen Mooli
Engineering Manager at Taylor and Francis · 18 upvotes · 3.1M views

We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with Microservices architecture as we wanted scale. Microservice architecture style is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities. You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many distributed servers. Microservices modularity facilitates independent updates/deployments, and helps to avoid single point of failure, which can help prevent large-scale outages. We also decided to use Event Driven Architecture pattern which is a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following: 1. #Microservices - Java with Spring Boot, Node.js with ExpressJS and Python with Flask 2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda 3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas

To build #Webapps we decided to use Angular 2 with RxJS

#Devops - GitHub, Travis CI, Terraform, Docker, Serverless

Tim Specht
Co-Founder and CTO at Dubsmash · 14 upvotes · 737K views

In order to accurately measure & track user behaviour on our platform, we quickly moved from the initial solution using Google Analytics to a custom-built one due to resource & pricing concerns we had.

While this does sound complicated, it’s as easy as clients sending JSON blobs of events to Amazon Kinesis from where we use AWS Lambda & Amazon SQS to batch and process incoming events and then ingest them into Google BigQuery. Once events are stored in BigQuery (which usually only takes a second from the time the client sends the data until it’s available), we can use almost-standard-SQL to simply query for data while Google makes sure that, even with terabytes of data being scanned, query times stay in the range of seconds rather than hours. Before ingesting their data into the pipeline, our mobile clients are aggregating events internally and, once a certain threshold is reached or the app is going to the background, sending the events as a JSON blob into the stream.

In the past we had workers running that continuously read from the stream and would validate and post-process the data and then enqueue them for other workers to write them to BigQuery. We went ahead and implemented the Lambda-based approach in such a way that Lambda functions would automatically be triggered for incoming records, pre-aggregate events, and write them back to SQS, from which we then read them, and persist the events to BigQuery. While this approach had a couple of bumps on the road, like re-triggering functions asynchronously to keep up with the stream and proper batch sizes, we finally managed to get it running in a reliable way and are very happy with this solution today.

#ServerlessTaskProcessing #GeneralAnalytics #RealTimeDataProcessing #BigDataAsAService
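
As a rough sketch of the Lambda step described above (records arriving from the Kinesis trigger and forwarded to SQS in batches), something like the following would work; the queue URL is hypothetical, and retry and error handling are omitted.

```python
# A hedged sketch of the Lambda step described above: records arrive from the
# Kinesis trigger, are decoded, and are forwarded to SQS in batches of 10.
# The queue URL is hypothetical; retry and error handling are omitted.
import base64
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/events-to-load"  # hypothetical

def handler(event, context):
    # Kinesis-triggered Lambdas receive base64-encoded payloads.
    events = [
        json.loads(base64.b64decode(record["kinesis"]["data"]))
        for record in event["Records"]
    ]
    # SQS accepts at most 10 messages per batch call.
    for i in range(0, len(events), 10):
        entries = [
            {"Id": str(i + j), "MessageBody": json.dumps(e)}
            for j, e in enumerate(events[i:i + 10])
        ]
        sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries)
    # Downstream workers drain the queue and persist the events to BigQuery.
    return {"batched": len(events)}
```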

Amazon Kinesis Firehose

Simple and Scalable Data Ingestion
PROS OF AMAZON KINESIS FIREHOSE
  Be the first to leave a pro
CONS OF AMAZON KINESIS FIREHOSE
  Be the first to leave a con

related Amazon Kinesis Firehose posts

Praveen Mooli
Engineering Manager at Taylor and Francis · 18 upvotes · 3.1M views

We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with Microservices architecture as we wanted scale. Microservice architecture style is an approach to developing an application as a suite of small independently deployable services built around specific business capabilities. You can gain modularity, extensive parallelism and cost-effective scaling by deploying services across many distributed servers. Microservices modularity facilitates independent updates/deployments, and helps to avoid single point of failure, which can help prevent large-scale outages. We also decided to use Event Driven Architecture pattern which is a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following: 1. #Microservices - Java with Spring Boot, Node.js with ExpressJS and Python with Flask 2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda 3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas

To build #Webapps we decided to use Angular 2 with RxJS

#Devops - GitHub, Travis CI, Terraform, Docker, Serverless
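
The #Eventsourcingframework list above pairs Amazon Kinesis with Kinesis Firehose for delivery into storage. As a rough illustration, here is a minimal boto3 sketch of putting a record onto a Firehose delivery stream; the stream name, region, and event shape are hypothetical.

```python
# A minimal boto3 sketch of putting a record onto a Kinesis Firehose delivery
# stream; the stream name, region, and event shape are hypothetical.
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

def deliver_event(event: dict) -> None:
    firehose.put_record(
        DeliveryStreamName="content-platform-events",  # hypothetical delivery stream
        # Firehose concatenates records, so a trailing newline keeps them line-delimited.
        Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
    )

deliver_event({"type": "article_published", "id": "abc-123"})
```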

Firehose.io

Build realtime Ruby web applications
PROS OF FIREHOSE.IO
  • 2
    RESTful
  • 2
    Rails gem
  • 2
    Works with ActiveRecord
  • 1
    Clean way to build real-time web applications
CONS OF FIREHOSE.IO
  Be the first to leave a con

related Firehose.io posts

Apache Storm

Distributed and fault-tolerant realtime computation
PROS OF APACHE STORM
  • 10
    Flexible
  • 6
    Easy setup
  • 3
    Clojure
  • 3
    Event Processing
  • 2
    Real Time
CONS OF APACHE STORM
  Be the first to leave a con

related Apache Storm posts

Marc Bollinger
Infra & Data Eng Manager at Thumbtack · 5 upvotes · 1.6M views

Lumosity is home to the world's largest cognitive training database, a responsibility we take seriously. For most of the company's history, our analysis of user behavior and training data has been powered by an event stream--first a simple Node.js pub/sub app, then a heavyweight Ruby app with stronger durability. Both supported decent throughput and latency, but they lacked some major features supported by existing open-source alternatives: replaying existing messages (also lacking in most message queue-based solutions), scaling out many different readers for the same stream, the ability to leverage existing solutions for reading and writing, and possibly most importantly: the ability to hire someone externally who already had expertise.

We ultimately migrated to Kafka in early- to mid-2016, citing both industry trends in companies we'd talked to with similar durability and throughput needs and the extremely strong documentation and community. We pored over Kyle Kingsbury's Jepsen post (https://aphyr.com/posts/293-jepsen-Kafka), as well as Jay Kreps' follow-up (http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen), talked at length with Confluent folks and community members, and still wound up running parallel systems for quite a long time, but ultimately, we've been very, very happy. Understanding the internals and proper levers takes some commitment, but it's taken very little maintenance once configured. Since then, the Confluent Platform community has grown and grown; we've gone from doing most development using custom Scala consumers and producers to being 60/40 Kafka Streams/Connects.

We originally looked into Storm / Heron, and we'd moved on from Redis pub/sub. Heron looks great, but we already had a programming model across services that was more akin to consuming a message stream than requiring a topology of bolts, etc. Heron also had just come out while we were starting to migrate things, and the community momentum and direction of Kafka felt more substantial than the older Storm. If we were to start the process over again today, we might check out Pulsar, although the ecosystem is much younger.

To find out more, read our 2017 engineering blog post about the migration!

Confluent

A stream data platform to help companies harness their high volume real-time data streams
PROS OF CONFLUENT
  • 4
    Free for casual use
  • 3
    No hypercloud lock-in
  • 3
    Dashboard for kafka insight
  • 2
    Easily scalable
  • 2
    Zero devops
CONS OF CONFLUENT
  • 1
    Proprietary

related Confluent posts

I have recently started using Confluent/Kafka Cloud. We want to do some stream processing. As I was going through Kafka I came across Kafka Streams and KSQL. Both seem to be a good fit for stream processing, but I could not understand which one should be used and whether one has any advantage over the other. We will be using a Confluent/Kafka managed cloud instance. In the near future, our producers and consumers will be running on premises, and we will be interacting with Confluent Cloud.

Also, Confluent Cloud Kafka has a primitive interface; is there a better UI for managing a Kafka cloud cluster?

Google Cloud Dataflow

A fully-managed cloud service and programming model for batch and streaming big data processing.
PROS OF GOOGLE CLOUD DATAFLOW
  • 7
    Unified batch and stream processing
  • 5
    Autoscaling
  • 4
    Fully managed
  • 2
    Throughput Transparency
CONS OF GOOGLE CLOUD DATAFLOW
  Be the first to leave a con

related Google Cloud Dataflow posts

I am currently launching 50 pipelines in a Google Cloud Data Fusion version 6.4 instance. These pipelines are launched daily and transport data from a MySQLServer database to Google BigQuery. The cost is becoming very high, and I was wondering whether the cost would decrease with Google Cloud Dataflow for the same rows transported.


Will Dataflow be the right replacement for AWS Glue? Are there any unforeseen exceptions, like certain proprietary transformations not being supported in Google Cloud Dataflow, gaps in the connector ecosystem, or data quality & data cleansing not being supported in Dataflow, etc.?

Also, how about Google Cloud Data Fusion as a replacement, in terms of no code/low code? (Since basic use cases in Glue are supported through a UI, CDF may be the right choice in that case.)

What would be the best choice?
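
For anyone weighing Dataflow against Glue, it may help to see what the programming model looks like. Below is a minimal Apache Beam sketch (the SDK behind Cloud Dataflow); the file paths are hypothetical, it runs locally on the DirectRunner by default, and targeting Dataflow is a matter of passing the appropriate runner and project options.

```python
# A minimal Apache Beam sketch (the programming model behind Cloud Dataflow).
# File paths are hypothetical; this runs locally on the DirectRunner, and the
# same pipeline can target Dataflow by supplying runner/project pipeline options.
import apache_beam as beam

def run():
    with beam.Pipeline() as p:
        (
            p
            | "ReadRows" >> beam.io.ReadFromText("input/orders.csv")
            | "ParseCsv" >> beam.Map(lambda line: line.split(","))
            | "KeyByCustomer" >> beam.Map(lambda cols: (cols[0], 1))
            | "CountPerCustomer" >> beam.CombinePerKey(sum)
            | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
            | "Write" >> beam.io.WriteToText("output/orders_per_customer")
        )

if __name__ == "__main__":
    run()
```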
