Apache NiFi vs AWS Data Pipeline

Pros of Apache NiFi
  • Visual data flows using directed acyclic graphs (DAGs) (15)
  • Free and open source (8)
  • Simple to use (7)
  • Reactive with back-pressure (5)
  • Scales horizontally as well as vertically (5)
  • Fast prototyping (4)
  • Bi-directional channels (3)
  • Data provenance (2)
  • Built-in graphical user interface (2)
  • End-to-end security between all nodes (2)
  • Can handle messages up to gigabytes in size (2)
  • HBase support (1)
  • Kudu support (1)
  • Hive support (1)
  • Slack integration (1)
  • Support for custom Processors in Java (1)
  • Plenty of articles (1)
  • Lots of documentation (1)

Pros of AWS Data Pipeline
  • Easy to create a DAG and execute it (1)
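The "reactive with back-pressure" pro refers to NiFi's behavior of pausing an upstream processor when the connection queue between two processors hits its configured threshold. A toy model of that idea (plain Python, not NiFi code; the threshold of 3 is an arbitrary illustration):

```python
from collections import deque

# Toy model of NiFi-style back-pressure between two processors (not NiFi code):
# a connection queue has a size threshold; while it is full, the upstream
# side is not scheduled until the downstream side drains the queue.
class Connection:
    def __init__(self, threshold):
        self.queue = deque()
        self.threshold = threshold

    def is_full(self):
        return len(self.queue) >= self.threshold

producer_backlog = list(range(10))   # flow files waiting to be emitted
conn = Connection(threshold=3)
delivered = []

while producer_backlog or conn.queue:
    # Upstream runs only while the connection is below its threshold.
    while producer_backlog and not conn.is_full():
        conn.queue.append(producer_backlog.pop(0))
    # Downstream drains one flow file per scheduling round.
    if conn.queue:
        delivered.append(conn.queue.popleft())

print(delivered)  # all 10 items arrive, but never more than 3 are queued at once
```

In real NiFi the thresholds are the "Back Pressure Object Threshold" and "Back Pressure Data Size Threshold" settings on each connection.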


Cons of Apache NiFi
  • HA support is not fully fledged (2)
  • Memory-intensive (2)

Cons of AWS Data Pipeline
  • None listed yet

What is Apache NiFi?

An easy-to-use, powerful, and reliable system to process and distribute data. It supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.
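NiFi's graphical interface is backed by a REST API, so flows can also be built programmatically. A sketch of the JSON body for creating a processor via POST to /nifi-api/process-groups/{group-id}/processors (field names follow the NiFi 1.x API; the processor type shown is NiFi's standard GenerateFlowFile, and the position values are arbitrary illustrations). The payload is only constructed here, not sent:

```python
import json

# Sketch of the entity NiFi's REST API expects when creating a processor
# (POST /nifi-api/process-groups/{group-id}/processors). Values are
# illustrative; a real call also needs a running NiFi instance and a
# valid process-group id.
processor_entity = {
    "revision": {"version": 0},  # optimistic-locking revision for new components
    "component": {
        "type": "org.apache.nifi.processors.standard.GenerateFlowFile",
        "name": "Generate test data",
        "position": {"x": 100.0, "y": 100.0},
    },
}

payload = json.dumps(processor_entity)
print(payload)
```

Sending it would be an HTTP POST with a JSON content type, e.g. with the requests library, against your NiFi host.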

What is AWS Data Pipeline?

AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the “data sources” that contain your data, the “activities” or business logic such as EMR jobs or SQL queries, and the “schedule” on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)–based analysis on that hour’s Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email.
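A Data Pipeline definition is a list of objects, each with an id, a name, and a list of key/value fields. The sketch below builds the hourly EMR-over-S3 part of the example above in the shape boto3's datapipeline.put_pipeline_definition expects; the ids, names, and S3 path are placeholders, and the boto3 call itself is shown only in a comment since it needs real credentials and a created pipeline:

```python
# Sketch of a pipeline definition in the shape that
# boto3.client("datapipeline").put_pipeline_definition(pipelineObjects=...)
# expects. Ids, names, and the S3 step path are illustrative placeholders.
schedule = {
    "id": "HourlySchedule",
    "name": "Every hour",
    "fields": [
        {"key": "type", "stringValue": "Schedule"},
        {"key": "period", "stringValue": "1 hour"},
        {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
    ],
}

emr_activity = {
    "id": "HourlyEmrJob",
    "name": "Analyze hourly S3 logs",
    "fields": [
        {"key": "type", "stringValue": "EmrActivity"},
        # refValue points at another object's id rather than a literal string.
        {"key": "schedule", "refValue": "HourlySchedule"},
        {"key": "step", "stringValue": "s3://example-bucket/jobs/analyze.jar,arg1"},
    ],
}

pipeline_objects = [schedule, emr_activity]

# With credentials and an existing pipeline this would be uploaded as:
# import boto3
# boto3.client("datapipeline").put_pipeline_definition(
#     pipelineId="df-EXAMPLE", pipelineObjects=pipeline_objects)
print(len(pipeline_objects))
```

The split between stringValue and refValue is how the definition format distinguishes literal settings from references to other pipeline objects.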



What are some alternatives to Apache NiFi and AWS Data Pipeline?

Kafka
Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

Apache Storm
Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate.

Logstash
Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). If you store them in Elasticsearch, you can view and analyze them with Kibana.

Apache Camel
An open source Java framework that focuses on making integration easier and more accessible to developers.

Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.